How can clinicians counter viral misinformation?

Communication about the adverse events linked to AstraZeneca’s COVID-19 vaccine has been particularly challenging. While there is enough evidence to say the AstraZeneca vaccine may cause very rare blood clots, according to Health Canada it is still very safe and effective and will remain on the market. As of April 14, Canada has reported one case of immune thrombotic thrombocytopenia among 480 000 people who have received the AstraZeneca vaccine. Other countries have reported anywhere from one case for every 40 000 doses in Denmark to one in 250 000 in the United Kingdom. Regulators are urging people to seek medical care if they experience severe headaches, shortness of breath, chest and stomach pain, leg swelling, or pinprick bruising following vaccination. Denmark has pulled the vaccine. However, public health experts have stressed that the risk of death and blood clots from COVID-19 is much higher. One in five patients hospitalized with COVID-19 develops some form of clot, and preprint research from Oxford University suggests the risk of rare blood clots in the brain is far higher in those with COVID-19 than in those who receive the AstraZeneca vaccine. In a recent Globe and Mail article, André Picard argued that the backlash to the AstraZeneca vaccine reflects a collective failure to “put risk in perspective.” “As many scientists have observed, you have a greater risk of being badly injured in a car crash while driving to your vaccination appointment than actually being harmed by a COVID-19 vaccine,” he wrote. “Drugs we take every day — Tylenol, birth-control pills, heart medications, sleeping pills — all have potentially severe side effects. We generally accept those risks, or at least don’t think of them. Why do we expect vaccines to be magically problem-free when we don’t expect that of other drugs?” According to Dr.
Noni MacDonald, a professor of pediatrics and infectious disease at Dalhousie University, how doctors frame information about risk can ease or escalate patients’ anxiety. At the Canadian Immunization Conference, she gave the example of saying a vaccine is 99.99% safe versus saying it has a 0.1% chance of serious side effects. MacDonald also urged clinicians to consider how much information they share at a time. While some people will be looking for a “full meal” of research, others will only need a “bite” to satisfy their questions. It’s important not to overwhelm people with detail unless they ask for it, she said. In addition to preparing patients to recognize misinformation, MacDonald said health workers should also prepare to address coincidental illnesses and deaths that arise after vaccination. Given how many millions of people are receiving COVID-19 vaccines, “Guillain-Barré will happen, sudden infant [death syndrome] will happen, spontaneous abortions will happen,” without having anything to do with the shots, she said. “We need to make sure people don’t jump to conclusions.”
Force overestimation during vascular occlusion is triggered by motor system inhibition

Skeletal muscles can adapt to exercise stimuli via changes in their mechanical and metabolic properties. These changes are specific to the type of exercise stimulus; intense resistance exercise generally causes increases in muscular size and strength, whereas exercise with much smaller loads (i.e., endurance exercise) results in increased muscle oxidative capacity without a considerable increase in muscular size. However, Takarada et al. have previously shown that vascular occlusion induces marked hypertrophy and a concomitant increase in strength even when the exercise load is much lower than that expected to induce muscular hypertrophy (i.e., loads typical of endurance exercise). These enhancing effects of ischemic resistance exercise on muscular strength have been supported by many other studies in the last 20 years. Nevertheless, little is known about the neural mechanisms underlying this enhancing effect of ischemic muscle contractions on human muscular strength, beyond enhancement of the hypothalamic–pituitary system state (e.g., increased plasma concentrations of human growth hormone) and enhancement of the spinal motoneuron state (e.g., the additional recruitment of fast-twitch fibers caused by muscle fatigue, with the intramuscular accumulation of metabolic subproducts such as lactate and protons). From a practical point of view, given its small mechanical stress and large effects on muscular strength, the deliberate combination of low-intensity resistance exercise and moderate vascular occlusion is potentially useful not only for improving performance in athletes but also for accelerating muscular strength recovery in aged people (including bedridden older adults) and for improving muscular function in patients undergoing postoperative rehabilitation.
We have noted that participants report the need for greater force to lift a weight when the resistance exercise begins with vascular occlusion but not with repetitive muscle contractions. Indeed, participants require more voluntary effort to exert the muscular force needed to lift a weight when they are undergoing resistance exercise following vascular occlusion. In this situation, the factor primarily responsible for the overestimation of perceived force exertion during vascular occlusion is assumed to be the centrally generated motor command, as previously hypothesized by McCloskey. However, the major cause of force overestimation remains unclear. In the current study, we sought to elucidate the neural mechanism of force exertion when combined with vascular occlusion, with special reference to the perception of exerted force. To do this, we used motor evoked potentials (MEPs) in response to transcranial magnetic stimulation (TMS) applied over the contralateral primary motor cortex (M1) as well as upper extremity H-reflex measurements. First, we investigated the effects of vascular occlusion (a tourniquet applied around the upper arm at approximately 200 mmHg for up to 60 s) on handgrip force perception using a contralateral force-matching task, which was used to quantify the sensation of effort. In this task, force is applied to one hand (the reference) and the participants attempt to exert the same amount of force with the other hand (the indicator) without visual feedback. The relationship between the level of force applied to the reference hand and that exerted by the indicator hand provides an objective indication of the sensation of effort in the reference hand. In the present study, the force-matching task was performed at given target forces (15%, 30%, or 45% of the maximal voluntary contraction [MVC]) with or without vascular occlusion.
Second, we investigated the effects of vascular occlusion on the motor system state by examining MEPs in response to TMS applied over the contralateral M1 in the resting state and during handgrip force exertion at the three predetermined target force levels. Third, we investigated the effects of vascular occlusion on spinal motoneuron excitability by examining contraction-induced H-reflexes in response to median nerve stimulation (H-responses). Of particular importance, we observed that rapid force overestimation occurred within 1 min of starting the occlusion; this was accompanied by the instantaneous suppression of both corticospinal tract and spinal motoneuron excitability. These results suggest that force overestimation during vascular occlusion may be caused by motor-related cortical areas acting as a source of excitatory input to the M1 and/or the corticospinal tract, recruiting additional motoneuron drive to the muscles. This interpretation is supported by the finding that MVCs were unchanged between conditions with and without vascular occlusion. Our results provide the first objective evidence to suggest that rapid force overestimation during vascular occlusion is triggered by the instantaneous inhibition of both corticospinal tract and spinal motoneuron excitability.
Participants and general procedures

The present study was conducted in accordance with the Declaration of Helsinki of 1964, revised in 2013. All experimental procedures complied with relevant laws and institutional guidelines and were approved by the Human Research Ethics Committee of the Faculty of Sport Sciences of Waseda University (Approval Number: 2020-411). Three experiments examined the influence of transient occlusion on the perception of force (the bilateral force perception experiment), on MEPs in the flexor carpi radialis (FCR) muscle in response to TMS (the unilateral TMS experiment), and on contraction-induced H-responses in the FCR muscle (the H-response experiment) (Table 1). Thirty-four healthy Japanese right-handed males (evaluated using the Edinburgh Handedness Inventory) were enrolled; 19 participated in the force perception and TMS experiments, completing one session in each experiment, and the other 15 participated in the H-response experiment, completing two sessions that were separated by at least 7 days. In each of the three experiments, all participants completed both the control and occlusion conditions. The force perception experiment was performed first, the TMS experiment was conducted within the following 7 days, and the H-response experiment was performed last. None of the participants reported neurological, psychiatric, or other contraindications to TMS. Their mean age was 20.5 ± 1.0 years (mean ± standard deviation, range 18–22 years). All participants provided both written and verbal informed consent. The mean height of the 15 participants in the H-response experiment was 169.8 ± 7.6 cm (mean ± standard deviation, range 160–190 cm), and that in the force perception and TMS experiments was 172.6 ± 6.9 cm (mean ± standard deviation, range 164–193 cm).
Bilateral force perception experiment

To first examine how vascular occlusion affects the perception of handgrip force, we used the contralateral force-matching method because it allows quantification of the ongoing perception of exerted muscular force. In this method, participants are first required to generate a specified level of force by contracting the muscles of the reference limb in the presence of external feedback; they are then asked to match the subjective magnitude of this force using the muscles of the contralateral limb without the assistance of feedback.
Participants were placed in a seated position with their upper body upright. Their upper arm was inclined at about 45° in front of the body with the aid of an armrest. To measure handgrip force, the participants held handgrip devices (dimensions: approximately 154 (width) × 240 (depth) × 60 (height) mm; weight: approximately 0.65 kg; measuring range: 0–100 kg; resolution: 1/16,000 [amplifier], Takei Scientific Instruments Co., Ltd., Niigata, Japan) with a strain gauge (KFG-5–120-C1-16; Kyowa Electronic Instruments Co. Ltd., Tokyo, Japan) in their right and left hands. The measured force was amplified (AD240-A; TEAC Instruments Co., Kawasaki, Japan), digitized (4 kHz), filtered using a Butterworth filter with a cutoff frequency of 10 Hz, and input into a visual feedback system (Panasonic CF-S9) with a display that showed the participants both the force exerted by their reference (right) hand and the predetermined target force level. To begin, the maximum voluntary force of the right hand without vascular occlusion was measured. Because the force perception experiment involved participants with no experience in force exertion combined with vascular occlusion, participants performed three brief MVCs (1–2 s in duration) on a cue given by an experimenter (“one, two, three, squeeze”) with a 60-s inter-squeeze interval. The mean value was used as the maximal voluntary handgrip force, which was subsequently used to calculate the three target force levels (15%, 30%, and 45% of the MVC). In the experimental task, the participants were instructed to match the exerted force to the predetermined target force levels (the contralateral force-matching task). The participants were given at least 5 min of rest to eliminate the influence of postexercise facilitation after MVC, in accordance with a previous study.
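The force-signal conditioning described above (digitization at 4 kHz followed by a 10 Hz low-pass Butterworth filter) can be sketched as follows. The library (SciPy), the filter order (fourth), and the zero-phase application are assumptions for illustration; the methods do not specify the implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 4000.0    # force sampling rate, Hz (digitized at 4 kHz)
CUTOFF = 10.0  # low-pass cutoff, Hz

def condition_force(raw: np.ndarray) -> np.ndarray:
    """Low-pass filter a raw force trace.

    Assumed 4th-order Butterworth, applied forward and backward
    (filtfilt) so the filtered trace has zero phase lag.
    """
    b, a = butter(4, CUTOFF / (FS / 2.0), btype="low")
    return filtfilt(b, a, raw)
```

A zero-phase filter is a natural choice here because any phase lag would shift the displayed force relative to the exerted force in the visual feedback loop.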
The maximal voluntary handgrip force was re-measured both at the end of the experimental task (post-MVC) and in the occluded condition after 3 min (post-MVC with occlusion). Each participant performed a force-matching task six times at each of the three target force levels without (control) or with (occluded) vascular occlusion, followed by a 3-min rest period. Six force-matching tasks (i.e., three target force levels with or without occlusion) were performed; thus, the numbers of participants in the 12 combinations of execution order were as follows: 15%/30%/45% MVC without occlusion: 2; 15%/45%/30% MVC without occlusion: 2; 30%/15%/45% MVC without occlusion: 2; 30%/45%/15% MVC without occlusion: 1; 45%/15%/30% MVC without occlusion: 1; 45%/30%/15% MVC without occlusion: 1; 15%/30%/45% MVC with occlusion: 1; 15%/45%/30% MVC with occlusion: 2; 30%/15%/45% MVC with occlusion: 2; 30%/45%/15% MVC with occlusion: 0; 45%/15%/30% MVC with occlusion: 2; 45%/30%/15% MVC with occlusion: 1.
During the experimental task, only the force exerted in the reference hand was displayed on a personal computer (PC) monitor (Panasonic CF-S9) in the aforementioned visual system. Participants were seated in front of a table facing the monitor and were asked to align the force exerted by the reference hand with a predetermined target force indicated on the monitor using visual feedback. A start-of-trial cue (“one, two, three, right squeeze”) was provided by an experimenter. After approximately 3 s, the experimenter provided another verbal signal (“one, two, three, left squeeze”) and participants were required to squeeze the left handgrip device with their left (indicator) hand at a force level that matched the reference hand without visual feedback (the bilateral force-matching task) (Fig. a). When the participant was satisfied that they were applying a level of force with the indicator hand that matched that of the reference hand, they provided a verbal signal (“yes”) to the experimenter. Visual feedback for the reference hand remained on the display throughout this period. An end-of-trial cue was provided by the experimenter approximately 7 s after the start of the reference-hand force exertion. The trial was performed six times with 6-s rest periods at each given target force with or without vascular occlusion, followed by a 3-min rest period. Vascular occlusion was produced using a tourniquet, which was attached at the proximal end of the right upper arm. Once the participants confirmed that they felt no pain using the Verbal Rating Scale (VRS), a pressure of approximately 200 mmHg was applied by pneumatic inflation approximately 10 s before the handgrip contractions started. This pressure was maintained throughout three handgrip contractions at each given target force with vascular occlusion and was released immediately after the end of the three handgrip contractions. The vascular occlusion in one force-matching task thus lasted for approximately 45 s.
We confirmed that no participants experienced pins and needles in their right arm immediately after vascular occlusion. Before the force-matching task, all participants received the task instructions and practiced the handgrip force exertion until they were satisfied that they were able to apply a level of force with their indicator hand that matched that of the reference hand within approximately 1 s of beginning the handgrip force exertion of the indicator hand.
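In the analysis, the mismatch between the indicator and reference hands is normalized as a matching value (MV), averaged over a 500 ms window starting 1.5 s after the indicator hand begins to exert force. A minimal sketch of that computation, assuming NumPy arrays of force samples at the 4 kHz digitization rate (function and variable names are illustrative, not from the original software):

```python
import numpy as np

FS = 4000  # samples per second (4 kHz digitization)

def matching_value(ref_force: np.ndarray, ind_force: np.ndarray,
                   onset_idx: int) -> float:
    """MV (%) = (indicator - reference) / reference * 100.

    Each force is averaged over a 500 ms window starting 1.5 s
    after indicator-hand force onset (onset_idx, in samples).
    """
    start = onset_idx + int(1.5 * FS)
    stop = start + int(0.5 * FS)
    ref = ref_force[start:stop].mean()
    ind = ind_force[start:stop].mean()
    return (ind - ref) / ref * 100.0
```

A positive MV indicates that the indicator hand overshot the reference force, which is the direction of error expected if occlusion inflates the sense of effort in the reference hand.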
Figure b and c show examples of the force data collected during the task. Because our preliminary work showed that it took approximately 1 s to perceive whether the forces exerted by both hands were the same after participants had fully practiced, the data used for the analysis were averaged over 500 ms, starting 1.5–2 s after the force was first applied to the handgrip by the indicator hand. Differences in exerted force between the reference and indicator hands were normalized as the matching value (MV [%]) as follows: MV (%) = (handgrip force of the indicator hand − handgrip force of the reference hand) / handgrip force of the reference hand × 100.

Unilateral TMS experiment

To elucidate the possible neural mechanisms underlying the effects of vascular occlusion on perceived force exertion, we investigated the effects of transient vascular occlusion of the upper arm on the motor system state during both a resting state and force exertion.
Participants were placed in a seated position with their upper body upright. Their upper arm was inclined at about 45° in front of the body with the aid of an armrest and their forearm was supinated. Before the experimental task, 10 TMS stimuli were applied with an interstimulus interval of approximately 5 s in the presence or absence of vascular occlusion in the resting state, with more than 3 min of rest between the conditions without and with vascular occlusion (Fig. a). The experimental procedure was the same as that used in the following experimental task (Fig. b), with the exception of the handgrip muscular contractions. Force measurement was performed using the same measurement system as that used in the force perception experiment. Six unilateral TMS tasks (i.e., three target force levels with or without occlusion) were performed in the same order as in the bilateral force perception experiment (see Bilateral force perception experiment). Monophasic TMS pulses were administered to the left M1 (controlling the right hand) via a stimulator (M2002, Magstim, Whitland, UK) using a double-figure-eight-shaped coil (4150-00 Double 70-mm Alpha Coil, Magstim) with a maximum magnetic field strength of 1.55 T. Each participant sat upright with their elbows bent in front of them and their hands resting on their thighs. The M1 of each participant was mapped extensively using 5–10 stimuli, with the current direction of the coil placed perpendicular to the anatomically defined central sulcus, to find the area evoking the largest response from the FCR muscle (the hot spot). The TMS coil was then positioned over the hot spot of the left M1, which was determined as the area with the lowest resting motor threshold. This was defined as the lowest stimulus intensity that elicited MEPs with peak-to-peak amplitudes greater than 50 µV in at least 5 of 10 trials . The handle of the coil was pointed backward (approximately 45° laterally from the midsagittal line). 
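The resting-motor-threshold criterion just described (MEPs with peak-to-peak amplitudes greater than 50 µV in at least 5 of 10 trials) amounts to a simple count over the trial amplitudes; a minimal sketch:

```python
def meets_rmt_criterion(mep_amplitudes_uv, threshold_uv=50.0,
                        required=5) -> bool:
    """True if at least `required` trials exceed the peak-to-peak
    amplitude threshold (50 uV in >= 5 of 10 trials by default)."""
    return sum(a > threshold_uv for a in mep_amplitudes_uv) >= required
```

In practice the stimulator intensity is lowered or raised between blocks of 10 stimuli until this predicate flips, and the lowest intensity that still satisfies it is taken as the resting motor threshold.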
During MEP recordings, participants were asked to remain in a resting state. The coil position was stabilized throughout the experiment using a coil stand assembled from multiple components (Manfrotto Distribution KK, Tokyo, Japan); however, we did not use a neuronavigation system to record the coil position. The optimal scalp position for M1 was marked directly onto the scalp with a black waterproof marker pen. The positioned coil was monitored continuously to maintain consistent positioning throughout the experiment, and resting motor thresholds were 58.8% ± 8.7% (mean ± standard deviation, range 44–80%) of the maximum stimulator output. Before the experimental tasks, the stimulus intensity was increased in 5–10% increments from 44 to 95% of the maximum stimulator output to determine the applied stimulus intensity for the unilateral force-matching task. The stimulus intensity needed to be sufficiently high for a single MEP waveform to be discriminated from the background electromyography (bEMG) activity; two different muscular contraction intensities (approximately 75% and 100% of the MVC) were therefore adopted. The applied stimulus intensity was 70.1% ± 2.4% of the maximum stimulator output, equivalent to 115.2% ± 1.4% of the resting motor threshold. The applied stimulus intensity for each participant was constant in all conditions to allow the obtained measurements to be analyzed and compared within participants. Stimulation was manually delivered once over the target site 1–1.5 s after the participants began to exert handgrip forces during each brief contraction (3–4 s in duration), with a 3-s inter-squeeze interval in the unilateral force-matching task (Fig. a, b). Each of the six measurement conditions (three target force levels, with or without occlusion) was performed once within a single measurement session (Fig. b).
Thus, the MEP was recorded six times for each measurement session and was recorded 36 times for each participant throughout the unilateral force-matching task. Surface EMG was measured from the right FCR muscle via bipolar silver surface electrodes (10 mm in diameter, Nihon Kohden Co., Tokyo, Japan), with a constant interelectrode distance of 20 mm. The skin overlying the identified muscles was cleaned with alcohol pads prior to electrode placement. Signals (analysis time of 30 ms) were amplified using a bandpass filter (15 Hz–10 kHz) and digitized (MEG-6108; Nihon Kohden Co.) at a sampling rate of 4 kHz.
During the experimental task, only the force exerted in the reference hand was displayed on a PC monitor (Panasonic CF-S9) in the same visual system as that used in the force perception experiment. Participants were seated in front of a table facing the monitor and were asked to align the force exerted by the reference hand with a predetermined target force indicated on the monitor using visual feedback (the unilateral force-matching task). The three predetermined target force levels were the same as those in the force perception experiment. A start-of-trial cue (“one, two, three, right squeeze”) was provided by an experimenter. During the measurement session, the participant performed the unilateral force-matching task six times (approximately 3 s in duration), followed by an approximately 3-s rest period, at each of the three target force levels (15%, 30%, or 45% of the MVC) with or without vascular occlusion (Fig. b). A 3-min rest period was maintained between measurement sessions. Under the occluded condition, the tourniquet was inflated to approximately 200 mmHg to restrict blood flow approximately 10 s before the handgrip contractions started. The vascular occlusion in one unilateral force-matching task thus lasted for approximately 48 s. The execution order was the same as that in the force perception experiment. Each of the six measurement conditions (three target force levels, with or without occlusion) was performed once within a single measurement session.

Analysis

To estimate the relative levels of responsiveness of the M1 to voluntary drive during voluntary handgrip contractions, the force produced by the superimposed twitch (superimposed twitch force) following TMS was expressed as a fraction of the pre-stimulus force at each TMS (Fig. a), in accordance with a previous study. To measure bEMG, the rectified EMG signal over the 100 ms before TMS was integrated, with the force kept at the maximum force level (Fig. b).
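The two quantities just described (the superimposed twitch force expressed as a fraction of the pre-stimulus force, and the rectified bEMG integrated over the 100 ms before TMS) might be computed as in the following sketch. The pre-stimulus force window and the post-stimulus search window marked in the comments are assumptions, as the methods do not state them:

```python
import numpy as np

FS = 4000  # force/EMG sampling rate, Hz

def twitch_ratio(force: np.ndarray, tms_idx: int) -> float:
    """Superimposed twitch force after TMS as a fraction of the
    pre-stimulus force (100 ms pre-window and 200 ms post-window
    are illustrative assumptions)."""
    pre = force[tms_idx - int(0.1 * FS):tms_idx].mean()
    post_peak = force[tms_idx:tms_idx + int(0.2 * FS)].max()
    return (post_peak - pre) / pre

def integrated_bemg(emg: np.ndarray, tms_idx: int) -> float:
    """Rectified EMG integrated over the 100 ms preceding TMS
    (rectangular integration, units of mV*s for an mV signal)."""
    seg = np.abs(emg[tms_idx - int(0.1 * FS):tms_idx])
    return seg.sum() / FS
```

Expressing the twitch relative to pre-stimulus force makes the measure comparable across the three target force levels.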
There were no trials with bEMG above 0.1 mV for the 100 ms period. We calculated the averaged waveform of the MEP under the unilateral force-matching task (an average of six recordings) (see Procedure and MEP measurement for details). We then measured the latency from stimulus onset to the averaged MEP onset determined by visual inspection, as well as the peak-to-peak amplitude of each MEP from 10 ms to 40 ms after TMS, the size of which reflects corticospinal excitability (Fig. c). These analyses were performed using analysis software (LabChart 7.3.8; ADInstruments, Tokyo, Japan). The silent period duration was taken as the time interval from the stimulus artifact to the return of continuous EMG (Fig. c). Because it was difficult to determine the end of the silent period (voluntary EMG activity recovers gradually rather than abruptly), the end of the silent period was determined as the moment at which the corresponding rectified EMG activity reached a value within two standard deviations of the rectified EMG signal in the period 100 ms before TMS, with careful visual inspection. However, two participants were excluded because the end of their silent period was relatively unclear: in one participant, a small burst of EMG occurred before the resumption of continuous activity, making it impossible to detect the silent period duration. These two exclusions meant that data from 17 participants were used for this experiment, and the trials of the same 17 participants in the force perception experiment were adopted as the analysis targets.

H-response experiment

To examine how vascular occlusion affects spinal motoneurons during force exertion, we investigated the effects of transient vascular occlusion of the upper arm on the H-response during constant isometric contraction.

Procedure and H-response measurement

Participants were seated comfortably in a chair with their right forearm resting on a pillow.
The elbow and shoulder were flexed at 100° and 15°, respectively. During recordings, the forearm was supinated and the wrist was flexed at approximately 15°. Arm position was standardized across participants as far as possible. The H-responses and motor responses were recorded with or without vascular occlusion at a pressure of 200 mmHg. Given that a previous report suggested that it is difficult to obtain H-responses from the FCR muscle in the absence of facilitation, each measurement was performed first at rest (with or without vascular occlusion) and then with facilitation (a moderate voluntary contraction against resistance; with or without vascular occlusion), with all measurements completed within 4 weeks. In the facilitation condition, participants were asked to hold a 1-kg weight and maintain a constant background isometric contraction of the right FCR. Before these measurements, we estimated the effects of this weight on the neural drive to skeletal muscle, measured from EMG activity. In three healthy subjects (three males; 20–22 years old), the 1-kg weight produced an EMG signal that was approximately 4.1–5.8% of that recorded during the MVC of individual forearm muscles (i.e., the FCR). Once the electrodes were applied (as described in the following paragraph), approximately 5 min of practice trials were used to familiarize the participants with the H-response stimulation and recording procedures. In all cases, the median nerve was stimulated once every 5 s, beginning at an intensity below the H-response threshold and increasing until the maximal motor (M)-response (Mmax) was reached. To record the H-responses (10 traces), the stimulation intensity was set at a level that evoked reflexes of 5–10% of the Mmax amplitude, on the ascending part of the recruitment curve. The same stimulation intensity was repeated 10 times in each recording condition with or without vascular occlusion. The duration of each recording condition was approximately 50 s, followed by a 60-s rest.
A pressure of 200 mmHg was applied by pneumatic inflation approximately 10 s before each recording condition; this pressure was maintained throughout each recording and was released immediately after recording ended. The vascular occlusion in one recording condition thus lasted for approximately 60 s. Electrical stimuli were delivered by a stimulator (DC-940B, Nihon Kohden Co.) and an isolator (SM-940B, Nihon Kohden Co.) at each level of intensity at a rate of 0.2 Hz (duration, 1 ms)—which did not result in H-response depression when the H-responses were elicited during a background contraction of the FCR muscle—to the right median nerve using flat-surfaced disk electrodes (12 mm wide, 46 mm long), with the cathode 24 mm proximal to the anode (9 mm in diameter). The electrodes were placed proximal to the antecubital fossa, approximately one-third of the distance from the lateral epicondyle to the biceps tendon. After the appropriate stimulating and recording sites were determined, we marked the electrode locations with permanent marker to ensure that electrodes were placed in the same position across all stimulation intensities. The H-responses and motor responses in surface EMG were obtained from the right FCR muscle via bipolar silver surface electrodes (10 mm in diameter, Nihon Kohden Co.) attached to the skin with electroencephalography paste, with a constant interelectrode distance of 20 mm. FCR muscle bellies were identified by palpation during manually resisted wrist flexion. The skin overlying the identified muscles was cleaned with alcohol pads prior to electrode placement. A reference electrode was fixed on the skin overlying the lateral epicondyle near the elbow joint of the right arm. Signals (analysis time of 5 ms) were amplified using a bandpass filter (15 Hz–3 kHz) and digitized (MEG-6108; Nihon Kohden Co.)
at a sampling rate of 10 kHz before being stored in the computer memory of a PC (LATITUDE D520, Dell Technologies, Round Rock, TX, USA). Wave data were inspected online and stored on the hard disk of the PC for the subsequent analysis of the latencies and peak-to-peak amplitudes of H-responses and the Mmax of the FCR. The M-response was elicited by the supramaximal stimulation of the median nerve at the antecubital fossa and was recorded with and without arterial occlusion. To confirm the effects of transient vascular occlusion on the maximum voluntary handgrip force, we measured the maximal voluntary handgrip force in participants with experience of handgrip force exertion with vascular occlusion within 2 weeks after the H-response measurement. We asked these participants to perform three brief MVCs (1–2 s in duration) with the right hand with or without vascular occlusion on a cue given by an experimenter (“one, two, three, squeeze”) with a 60-s inter-squeeze interval, as in the force perception experiment. Seven of the 15 participants first performed the brief MVCs without vascular occlusion, and then (after a 3-min rest period) performed them with vascular occlusion. The other eight participants performed the brief MVCs in the reverse order. Vascular occlusion was produced using a tourniquet, which was attached at the proximal end of the right upper arm. A pressure of 200 mmHg was applied by pneumatic inflation approximately 10 s before each handgrip contraction started; this pressure was released immediately after the end of the handgrip contraction.

Analysis

The magnitudes of FCR H-responses and M-responses were evaluated by the peak-to-peak amplitudes of the EMG responses; these were measured in response to at least eight stimuli applied to the median nerve, and were averaged at each stimulation intensity for each participant. The latencies of these averaged waves were measured from stimulus artifact to the start of each evoked action potential (Fig.
a). The analyses were performed using analysis software (LabChart 8). The peak-to-peak amplitude values were also expressed as a proportion of the M max values (Fig. b). All signals were visually inspected to ensure that the measurements obtained from the software were accurate. During the EMG measurements, force was exerted to hold a 1-kg weight and maintain a constant background isometric contraction of the right FCR. The data used for the analysis were averaged over 500 ms after the peak was reached, and the mean value of three brief MVCs was used as the maximal voluntary handgrip force. Statistics and power analysis Significant differences in MEP amplitude and latency in a resting state, the magnitude and latency of the FCR H-response and M-response, and the maximum voluntary handgrip force with and without vascular occlusion were investigated using paired t -tests. Differences in maximal voluntary force among three MVCs (within-participant factors: MVC, post-MVC, and post-MVC with occlusion) were determined using one-way analysis of variance (ANOVA). Matching values, superimposed twitch forces, MEP amplitudes, silent period durations, and bEMG were analyzed using repeated-measures two-way ANOVA with within-participant factors of Intensity (15%, 30%, and 45% of the MVC), and Condition (with or without vascular occlusion). Greenhouse–Geisser corrections were applied when appropriate to adjust for non-sphericity, and degrees of freedom were changed using a correction coefficient. Post hoc multiple comparisons were performed using Holm’s method. Data were analyzed using Jeffreys’s Amazing Statistics Program (JASP ver. 0.17.2.1). A significance threshold of p < 0.05 was used for all tests. When the results of the main effect and interaction of the ANOVA are presented, Cohen’s d and η are also shown as an effect size index. The values of the effect size index (Cohen’s d) were interpreted as 0.20, 0.50, and 0.80 for small, medium, and large effects, respectively . 
η is used to denote eta squared as an effect size index, the values of which were interpreted as 0.10, 0.25, and 0.40 for small, medium, and large effects, respectively . We therefore designed the experiment to have 80% power for detecting the effect size (0.25, η ), using a significance level of 5%. We used G*Power 3.1 (Institut für Experimentelle Psychologie, Düsseldorf, Germany) to compute the required total sample size of the current study by conducting a repeated-measures ANOVA with within-participant factors, using 95% power (1 − β error probability). The computed required sample size was 14 participants for each experimental group. Unless otherwise noted as the standard deviation, data are expressed as the mean ± standard error of the mean.
To estimate the relative levels of responsiveness of the M1 to voluntary drive during voluntary handgrip contractions, the force produced by the superimposed twitch (superimposed twitch force) following TMS was expressed as a fraction of the pre-stimulus force at each TMS (Fig. a), in accordance with a previous study . To measure bEMG, the rectified EMG signal over the 100-ms period before TMS was integrated, with the force kept at the maximum force level (Fig. b). There were no trials with bEMG above 0.1 mV during this 100-ms period. We calculated the averaged waveform of the MEP under the unilateral force-matching task (an average of six recordings) (see Procedure and MEP measurement for details). We then measured the latency from stimulus onset to the averaged MEP onset, determined by visual inspection, as well as the peak-to-peak amplitude of each MEP from 10 ms to 40 ms after TMS, the size of which reflects corticospinal excitability , (Fig. c). These analyses were performed using analysis software (LabChart 7.3.8; ADInstruments, Tokyo, Japan). The silent period duration was taken as the time interval from the stimulus artifact to the return of continuous EMG , (Fig. c). Because it was difficult to determine the end of the silent period (voluntary EMG activity recovers gradually rather than abruptly), the end of the silent period was defined as the moment at which the corresponding rectified EMG activity reached a value within two standard deviations of the rectified EMG signal in the 100-ms period before TMS , , with careful visual inspection. However, two participants were excluded because the end of their silent period was unclear: in one participant, a small burst of EMG occurred before the resumption of continuous activity, making it impossible to determine the silent period duration.
These two exclusions left 17 participants for this experiment, and the trials of these 17 participants were used as the analysis targets for the force perception experiment.
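The silent-period criterion described above (the end is the moment the rectified EMG returns to within two standard deviations of its pre-TMS baseline) can be sketched as follows. This is an illustrative reconstruction, not the authors' analysis code: the function name, the smoothing window, and the 50-ms MEP skip are our assumptions.

```python
import numpy as np

def silent_period_end(emg, fs, tms_idx, mep_skip_ms=50.0,
                      search_ms=300.0, baseline_ms=100.0, smooth_ms=5.0):
    """Estimate the sample index at which the TMS-evoked silent period ends.

    The end is taken as the first post-MEP sample at which the smoothed,
    rectified EMG returns to within two standard deviations of its mean
    over the 100-ms window before TMS. Returns None if no return is found
    within `search_ms`.
    """
    rect = np.abs(np.asarray(emg, dtype=float))
    win = max(1, int(smooth_ms * fs / 1000.0))
    smooth = np.convolve(rect, np.ones(win) / win, mode="same")

    # Pre-stimulus baseline (ends slightly before TMS to avoid edge effects).
    n_base = int(baseline_ms * fs / 1000.0)
    base = smooth[tms_idx - n_base - win:tms_idx - win]
    threshold = base.mean() - 2.0 * base.std()

    # Search after the MEP (roughly 10-40 ms post-TMS) up to search_ms.
    start = tms_idx + int(mep_skip_ms * fs / 1000.0)
    stop = min(len(smooth), tms_idx + int(search_ms * fs / 1000.0))
    above = np.nonzero(smooth[start:stop] >= threshold)[0]
    return int(start + above[0]) if above.size else None
```

The smoothing step is needed because a mean-minus-2-SD criterion applied to raw rectified samples would rarely be crossed cleanly; visual inspection, as in the paper, remains the final arbiter.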
To examine how vascular occlusion affects spinal motoneurons during force exertion, we investigated the effects of transient vascular occlusion of the upper arm on the H-response during constant isometric contraction.
Participants were seated comfortably in a chair with their right forearm resting on a pillow. The elbow and shoulder were flexed at 100° and 15°, respectively. During recordings, the forearm was supinated and the wrist was flexed at approximately 15°. We tried to position the arms the same way across participants. The H-responses and motor responses were recorded with or without vascular occlusion at a pressure of 200 mmHg. Given that a previous report suggested that it is difficult to obtain H-responses from the FCR muscle in the absence of facilitation , each measurement was performed first at rest with or without vascular occlusion, and then with facilitation (a moderate voluntary contraction against resistance) with or without vascular occlusion within 4 weeks. In the facilitation condition, participants were asked to hold a 1-kg weight and maintain a constant background isometric contraction of the right FCR. Before these measurements, we estimated the effects of this weight on the neural drive to skeletal muscle, measured from EMG activity. In three healthy subjects (three males; 20–22 years old), the 1-kg weight produced an EMG signal that was approximately 4.1–5.8% of that recorded during the MVC of individual forearm muscles (i.e., the FCR). Once the electrodes were applied (as described in the following paragraph), approximately 5 min of practice trials were used to familiarize the participants with the H-response stimulation and recording procedures. In all cases, the median nerve was stimulated once every 5 s, beginning at an intensity below the H-response threshold and increasing until the maximal motor (M)-response (M max ) was reached. To record the H-responses (10 traces), the stimulation intensity was set at an intensity that evoked reflexes of 5–10% of the M max amplitude , on the ascending part of the recruitment curve . The same stimulation intensity was repeated 10 times in each recording condition with or without vascular occlusion. 
The duration of each recording condition was approximately 50 s, followed by a 60-s rest. A pressure of 200 mmHg started to be applied by pneumatic inflation approximately 10 s before each recording condition; this pressure was maintained throughout each recording and was released immediately after recording ended. The vascular occlusion in one recording condition thus lasted for approximately 60 s. Electrical stimuli were delivered by a stimulator (DC-940B, Nihon Kohden Co.) and an isolator (SM-940B, Nihon Kohden Co.) at each level of intensity to the right median nerve at a rate of 0.2 Hz (duration, 1 ms), a rate that did not result in H-response depression when the H-responses were elicited during a background contraction of the FCR muscle , using flat-surfaced disk electrodes (12 mm wide, 46 mm long), with the cathode 24 mm proximal to the anode (9 mm in diameter). The electrodes were placed proximal to the antecubital fossa, approximately one-third of the distance from the lateral epicondyle to the biceps tendon , . After the appropriate stimulating and recording sites were determined, we marked the electrode locations with permanent marker to ensure that the electrodes were placed in the same position across all stimulation intensities. The H-responses and motor responses in surface EMG were obtained from the right FCR muscle via bipolar silver surface electrodes (10 mm in diameter, Nihon Kohden Co.) attached to the skin with electroencephalography paste, with a constant interelectrode distance of 20 mm. FCR muscle bellies were identified by palpation during manually resisted wrist flexion. The skin overlying the identified muscles was cleaned with alcohol pads prior to electrode placement. A reference electrode was fixed on the skin overlying the lateral epicondyle near the elbow joint of the right arm. Signals (analysis time of 5 ms) were amplified using a bandpass filter (15 Hz–3 kHz) and digitized (MEG-6108; Nihon Kohden Co.)
at a sampling rate of 10 kHz before being stored in the computer memory of a PC (LATITUDE D520, Dell Technologies, Round Rock, TX, USA). Wave data were inspected online and stored in the hard disk of the PC for the subsequent analysis of latencies and peak-to-peak amplitudes of H-responses and the M max of the FCR. The M-response was elicited by the supramaximal stimulation of the median nerve at the antecubital fossa and was recorded with and without arterial occlusion. To confirm the effects of transient vascular occlusion on the maximum voluntary handgrip force, we measured the maximal voluntary handgrip force in participants with experience of handgrip force exertion with vascular occlusion within 2 weeks after the H-response measurement. We asked these participants to perform three brief MVCs (1–2 s in duration) with the right hand with or without vascular occlusion on a cue given by an experimenter (“one, two, three, squeeze”) with a 60-s inter-squeeze interval, as in the force perception experiment. Seven of the 15 participants first performed the brief MVCs without vascular occlusion, and then (after a 3-min rest period) performed them with vascular occlusion. The other eight participants performed the brief MVCs in reverse order. Vascular occlusion was produced using a tourniquet, which was attached at the proximal end of the right upper arm. A pressure of 200 mmHg started to be applied by pneumatic inflation approximately 10 s before each handgrip contraction started; this pressure was released immediately after the end of the handgrip contraction.
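The stimulus-intensity selection described earlier in the procedure (finding an intensity that evokes an H-response of 5–10% of Mmax on the ascending limb of the recruitment curve) can be sketched as below. All names are illustrative; this is a sketch of the selection logic, not the authors' setup.

```python
import numpy as np

def pick_h_intensity(intensities, h_amps, m_max, lo=0.05, hi=0.10):
    """Return the lowest stimulus intensity whose H-response falls within
    [lo, hi] x Mmax on the ascending limb of the H recruitment curve.

    `intensities` and `h_amps` are paired arrays from a recruitment series
    (intensity increased stepwise until Mmax was reached).
    """
    h_frac = np.asarray(h_amps, dtype=float) / m_max
    peak = int(np.argmax(h_frac))      # top of the H recruitment curve
    on_limb = h_frac[:peak + 1]        # ascending limb only
    ok = np.nonzero((on_limb >= lo) & (on_limb <= hi))[0]
    if ok.size == 0:
        raise ValueError("no intensity evokes 5-10% of Mmax on the ascending limb")
    return float(np.asarray(intensities, dtype=float)[ok[0]])
```

Restricting the search to the ascending limb matters because, past the peak of the curve, antidromic collision with the growing M-response makes H-response amplitudes ambiguous.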
The magnitudes of FCR H-responses and M-responses were evaluated by the peak-to-peak amplitudes of the EMG responses; these were measured in response to at least eight stimuli applied to the median nerve, and were averaged at each stimulation intensity for each participant , . The latencies of these averaged waves were measured from stimulus artifact to the start of each evoked action potential (Fig. a). The analyses were performed using analysis software (LabChart 8). The peak-to-peak amplitude values were also expressed as a proportion of the M max values (Fig. b). All signals were visually inspected to ensure that the measurements obtained from the software were accurate. During the EMG measurements, force was exerted to hold a 1-kg weight and maintain a constant background isometric contraction of the right FCR. The data used for the analysis were averaged over 500 ms after the peak was reached, and the mean value of three brief MVCs was used as the maximal voluntary handgrip force.
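A minimal sketch of the peak-to-peak measurement and Mmax normalization described above (illustrative helper names; not the authors' LabChart workflow):

```python
import numpy as np

def peak_to_peak(trace):
    """Peak-to-peak amplitude of one evoked-response window."""
    trace = np.asarray(trace, dtype=float)
    return float(trace.max() - trace.min())

def h_over_mmax(h_traces, mmax_trace):
    """Average the peak-to-peak H-response amplitude across stimuli
    (at least eight per condition in the paper) and express it as a
    proportion of the Mmax amplitude."""
    mean_h = float(np.mean([peak_to_peak(t) for t in h_traces]))
    return mean_h / peak_to_peak(mmax_trace)
```

Normalizing to Mmax makes H-response amplitudes comparable across participants and sessions despite differences in electrode placement and tissue impedance.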
Significant differences in MEP amplitude and latency in a resting state, the magnitude and latency of the FCR H-response and M-response, and the maximum voluntary handgrip force with and without vascular occlusion were investigated using paired t -tests. Differences in maximal voluntary force among the three MVCs (within-participant factors: MVC, post-MVC, and post-MVC with occlusion) were determined using one-way analysis of variance (ANOVA). Matching values, superimposed twitch forces, MEP amplitudes, silent period durations, and bEMG were analyzed using repeated-measures two-way ANOVA with within-participant factors of Intensity (15%, 30%, and 45% of the MVC) and Condition (with or without vascular occlusion). Greenhouse–Geisser corrections were applied when appropriate to adjust for non-sphericity, and degrees of freedom were adjusted using a correction coefficient. Post hoc multiple comparisons were performed using Holm's method. Data were analyzed using Jeffreys's Amazing Statistics Program (JASP ver. 0.17.2.1). A significance threshold of p < 0.05 was used for all tests. When the results of the main effects and interactions of the ANOVA are presented, Cohen's d and η² are also shown as effect size indices. The values of Cohen's d were interpreted as 0.20, 0.50, and 0.80 for small, medium, and large effects, respectively . η² denotes eta squared, the values of which were interpreted as 0.10, 0.25, and 0.40 for small, medium, and large effects, respectively . We therefore designed the experiment to have 80% power for detecting the effect size (η² = 0.25), using a significance level of 5%. We used G*Power 3.1 (Institut für Experimentelle Psychologie, Düsseldorf, Germany) to compute the required total sample size of the current study by conducting a repeated-measures ANOVA with within-participant factors, using 95% power (1 − β error probability). The computed required sample size was 14 participants for each experimental group. Unless otherwise noted as standard deviation, data are expressed as the mean ± standard error of the mean.
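The effect-size conventions above can be made concrete with small helper functions. Cohen's f (last function) is the effect-size metric G*Power actually takes for ANOVA power and sample-size calculations; it is obtained from eta squared. These helpers are illustrative, not the authors' code.

```python
import numpy as np

def cohens_d_paired(x, y):
    """Cohen's d for a paired comparison: mean difference divided by the
    standard deviation of the differences."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(diff.mean() / diff.std(ddof=1))

def eta_squared(ss_effect, ss_total):
    """Eta squared: proportion of total variance explained by an effect."""
    return ss_effect / ss_total

def eta2_to_cohens_f(eta2):
    """Convert eta squared to Cohen's f, the effect-size input G*Power
    expects for ANOVA designs: f = sqrt(eta2 / (1 - eta2))."""
    return float(np.sqrt(eta2 / (1.0 - eta2)))
```

For example, the medium eta squared of 0.25 used for the power analysis corresponds to f ≈ 0.58, which is the value one would enter into G*Power's repeated-measures ANOVA module.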
Bilateral force perception experiment

The participants were asked to determine the magnitude of the handgrip force exerted by the reference hand by producing a brief matching contraction with the indicator hand, thereby numerically estimating the subjective effort required to exert the handgrip force of the reference hand. There were no significant differences in the maximal voluntary handgrip force before (321.4 ± 19.5 N) and after (327.4 ± 17.6 N) the contralateral force-matching task, or when combined with vascular occlusion (326.3 ± 14.4 N; F [1.47, 23.52] = 0.58; p = 0.51). Figure shows the MVs in the control and occluded conditions at each level of target force; these were calculated as the difference in exerted handgrip force between the reference and indicator hands. The mean MVs in the control condition at the three target levels were 54.6% ± 11.4%, 26.3% ± 9.1%, and 14.4% ± 8.3%, respectively, and those in the occluded condition were 83.2% ± 10.8%, 40.3% ± 8.6%, and 33.7% ± 8.4%, respectively. Two-way ANOVA revealed significant main effects of Intensity ( F [1.56, 25.01] = 24.2; p = 5.26 × 10⁻⁶; effect size: η² = 0.44) and Condition ( F [1, 16] = 33.8; p = 2.61 × 10⁻⁵), but no significant interaction between Intensity and Condition ( F [1.69, 27.15] = 2.81; p = 0.085) (Fig. ). These results indicate that, when combined with arterial occlusion, the handgrip force exerted by the indicator hand is significantly increased at all levels of target force during handgrip contractions of the reference hand. Together, these findings suggest that vascular occlusion leads to the overestimation of exerted force.

Unilateral TMS experiment

Resting-state MEP amplitudes were significantly lower in the occluded condition (179.7 ± 32.6 µV) than in the control condition (232.4 ± 40.0 µV; t [17] = 3.97, Cohen's d = 0.96; p = 0.001, paired t -test) (Fig. a, b).
However, there were no significant differences in MEP amplitudes between the two conditions during force exertion (Table 2; Fig. a). Two-way ANOVA revealed no significant main effects of Intensity ( F [1.03, 16.57] = 2.43; p = 0.13) or Condition ( F [1, 16] = 0.07; p = 0.79), and no interaction between Intensity and Condition ( F [1.34, 21.48] = 0.082; p = 0.84). There were no significant differences in MEP latencies between the two conditions during the resting state (control: 14.4 ± 0.28 ms; occluded: 14.6 ± 0.27 ms; t [17] = − 1.52, p = 0.14) or force exertion (main effects of Intensity [ F (1.52, 27.39) = 6.72; p = 0.007], Condition [ F (1, 18) = 0.02; p = 0.88], and interaction between Intensity and Condition [ F (1.91, 34.47) = 0.23; p = 0.78]) (15% MVC, control: 12.1 ± 0.23 ms, occluded: 12.2 ± 0.28 ms; 30% MVC, control: 12.0 ± 0.53 ms, occluded: 12.1 ± 0.22 ms; 45% MVC, control: 11.8 ± 0.24 ms, occluded: 11.8 ± 0.23 ms). The silent period duration was longer with vascular occlusion than without. Two-way ANOVA revealed no significant main effect of Intensity ( F [1.37, 21.92] = 0.305; p = 0.661) and no interaction between Intensity and Condition ( F [1.54, 24.06] = 0.073; p = 0.88). However, there was a significant main effect of Condition ( F [1, 16] = 17.51; p = 7.00 × 10⁻⁴; effect size: η² = 0.18) (Fig. b). bEMG revealed no significant changes among the conditions during force exertion (main effects of Intensity [ F (1.07, 17.14) = 53.4; p = 8.31 × 10⁻⁷], Condition [ F (1, 16) = 2.63; p = 0.12], and interaction between Intensity and Condition [ F (1.10, 17.63) = 0.050; p = 0.84]) (Fig. c). Superimposed twitch force revealed no significant changes among the conditions during force exertion (main effects of Intensity [ F (1.49, 23.9) = 28.4; p = 2.28 × 10⁻⁶], Condition [ F (1, 16) = 0.14; p = 0.71], and interaction between Intensity and Condition [ F (1.38, 22.0) = 0.20; p = 0.73]) (Table 2).
H-response experiment

The amplitudes of contraction-induced H-responses of the FCR were significantly lower in the occluded condition (0.51 ± 0.16 mV) than in the control condition (0.70 ± 0.18 mV; t [14] = 3.31, Cohen's d = 0.53; p = 0.005, paired t -test) (Table 3). However, there were no significant differences in M-response latencies ( t [14] = 1.67, p = 0.11), M-response amplitudes ( t [14] = 1.13, p = 0.27), M max latencies ( t [14] = 1.52, p = 0.15), M max amplitudes ( t [14] = − 0.43, p = 0.67), H-response latencies ( t [14] = 1.03, p = 0.33), or bEMG ( t [14] = 1.22, p = 0.24) between the two conditions (Table 3; Fig. b). The H-response/M max ratio was significantly decreased ( t [14] = 3.88, Cohen's d = 0.46; p = 0.002), with no change in M-response amplitudes expressed as a proportion of M max ( t [14] = 1.76, p = 0.10) (Fig. b). There were no significant differences in maximal voluntary handgrip force between the conditions with (325.7 ± 19.3 N) and without (331.0 ± 18.0 N; t [14] = − 1.15, p = 0.26) vascular occlusion. Together with the maximal voluntary handgrip force results from the force perception experiment, our findings indicate that transient vascular occlusion has no effect on maximal voluntary handgrip force, regardless of prior experience of handgrip force exertion with vascular occlusion.
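The matching values (MVs) reported above are percentages derived from the difference in exerted handgrip force between the two hands. A minimal sketch of one plausible computation is given below; normalizing by the reference force is our assumption, as the paper does not spell out the denominator.

```python
def matching_value(reference_force_n, indicator_force_n):
    """Matching value (MV, %): difference between the indicator-hand and
    reference-hand forces, expressed relative to the reference force.

    A positive MV means the indicator hand overshot the reference hand,
    i.e., the reference force was overestimated. The normalization by the
    reference force is an illustrative assumption.
    """
    return 100.0 * (indicator_force_n - reference_force_n) / reference_force_n
```

Under this convention, the larger MVs in the occluded condition directly express how much extra force the indicator hand produced for the same reference target.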
In the present study, vascular occlusion, applied with a tourniquet at the proximal end of the reference (right) upper arm, increased the handgrip force produced by the indicator (left) hand in a contralateral force-matching task. This finding indicates that vascular occlusion increases the sensation of effort, as evidenced by the overestimation of the exerted handgrip force. Furthermore, vascular occlusion significantly attenuated not only the H-response amplitudes of the FCR in response to median nerve stimulation, but also the MEP amplitudes in the FCR muscle in response to TMS over the left M1 in the resting state. However, vascular occlusion combined with low-intensity handgrip contraction did not affect the bEMG or the MEP amplitude or latency in the FCR muscle. Together, these results suggest that vascular occlusion instantaneously inhibits both spinal motoneuron and corticospinal tract excitability, resulting in an overestimation of perceived force exertion. Moreover, the neural basis of this force overestimation may be that motor-related cortical areas, with increased activity, act as a source of excitatory input to the M1 and/or the corticospinal tract: the handgrip force level could not be maintained without such compensatory input to the spinal motoneurons. It has been reported that, in the resting state, blood flow of the brachial artery is acutely restricted immediately after the initiation of occlusion (applied by an occlusion cuff attached to the proximal end of the upper arm at a pressure of 100 mmHg); during the occlusion, blood flow is maintained at approximately one-sixth of that before occlusion .
These observations indicate that, although it may not suppress the circulation completely (leading to vasodilatation), moderate vascular occlusion may compress the underlying arteries and veins , and might cause blood pooling in the capacitance vessels of the distal portion of the arm, with a concurrent decrease in blood flow through the arteries. Upon occlusion pressure release (i.e., reperfusion), blood flow immediately returns toward its resting level . This post-occlusive hyperemia is enhanced only by low-intensity muscle contraction . Similarly, a local distension of the vascular network with enhanced post-occlusive hyperemia is observed in low-intensity cycling exercise with vascular occlusion (applied by an occlusion cuff attached to the proximal end of both thighs), as indicated by a greater change in the diameter of the superficial femoral artery . Vascular occlusion triggers the activity of group III and IV muscle afferent fibers even in the relaxation phase (without muscle contraction); this activity is directly proportional to the blood flow rate before occlusion . This instantaneous activation of group III and IV muscle afferent fibers by arterial occlusion is greater during ischemic muscle contraction than during contraction with natural blood flow (i.e., non-ischemic contraction) – . Furthermore, the acute reduction of arterial supply to the contracting muscle activates a new group of fibers, predominantly belonging to group IV, that is silent during normal contraction; these fibers are more activated by ischemic contractions than by non-ischemic ones , . It is thus reasonable to assume that the instantaneous activity of the group III and IV mechanosensitive units was increased during the ischemic handgrip contractions in the present study, although the vascular occlusion never caused pain (i.e., it did not act as a nociceptive stimulus), as indicated by unchanged pain/discomfort scale scores .
This may be because the vascular occlusion in the current study led to local distension of the vascular network. The vascular occlusion-induced instantaneous activity of group III and IV mechanosensitive units may have been partly responsible for the reduced H-response amplitudes (i.e., decreased reflex responses to the active motoneuron pool) in the condition with a 60-s vascular occlusion. Although few studies have investigated the impact of acute ischemia on the H-response and M-wave, one study reported that acute ischemia for 5 min via femoral artery occlusion reduces spinal excitability, as determined by the soleus H-reflex . However, the precise mechanisms underlying such an instantaneous reduction in H-response amplitudes remain unclear. In addition to mechanosensitive afferent fiber activity, the activity of metabosensitive afferent fibers is also increased by ischemic muscle contraction, and is additive to mechanosensitive afferent fiber activity. Vascular occlusion at approximately 100 mmHg for 5 min significantly increases plasma lactate concentrations even in the resting state; in combination with muscle contraction within 5 min, it markedly increases plasma lactate concentrations , , . The resulting acidic intramuscular environment can stimulate sympathetic nerve activity through chemoreceptive reflexes, which are mediated by intramuscular metaboreceptors and group III and IV afferent fibers . Moreover, the firing of these mechanosensitive and metabosensitive group III/IV muscle afferents not only decreases spinal motoneuron excitability, with a marked decline in the motor unit discharge rate , but also decreases the excitability of the contralateral M1 .
In accordance with these neurophysiological alterations, vascular occlusion (applied by a tourniquet to the proximal end of the upper arm) in the present study significantly decreased not only the amplitudes of H-responses, but also the amplitudes of MEPs in the resting state, meaning that the corticospinal pathway generated less output for the same voluntary drive during occlusion. Nevertheless, no significant changes were observed in the amplitudes of MEP, bEMG, or superimposed twitch force during vascular occlusion combined with handgrip contraction. Together, these findings suggest that the inhibition of the M1 and/or the corticospinal tract and of spinal motoneurons may be compensated for by motor-related cortical areas acting as a source of excitatory input, recruiting additional motoneurons to drive the muscles during the overestimation of exerted force; handgrip force levels could not be maintained without such compensatory input to the M1 and/or the corticospinal tract from motor-related cortical areas. It is thus reasonable to assume that force overestimation during vascular occlusion is caused by motor-related cortical areas functioning as the source of excitatory input to spinal motoneurons via the corticospinal tract and/or the M1, with increased activity in the neural center (voluntary drive). This speculation is also supported by previous studies , suggesting that the brain takes signals generated upstream of the M1 as an indicator of motor effort magnitude. Furthermore, the participants were estimating levels of handgrip force based on the perceived sense of effort , which may be linked to activity in neural centers upstream of the motor cortex , rather than to a corollary discharge of the motor command. A similar explanation likely applies to our finding of an increased silent period duration during handgrip contraction with vascular occlusion.
When TMS is applied over the M1 during a voluntary contraction, the MEP is followed by a period of near silence in EMG, which lasts for more than 200 ms with a high-intensity stimulus , . Silent periods longer than 100 ms following TMS over the M1 are caused by inhibition within the cortex , , ; an increased silent period duration suggests increased cortical inhibition , . Thus, our observed lack of significant changes in the amplitudes of MEP, bEMG, and superimposed twitch force despite increased cortical inhibition may reflect a compensatory influence of motor-related cortical areas on the M1 and/or the corticospinal tract during the overestimation of exerted force. This speculation is supported by the current result of unchanged handgrip force with and without occlusion; a compensatory input to the M1 and/or the corticospinal tract from motor-related cortical areas would enable participants to maintain the exerted force level, leading them to believe that both hands were using the same force in the force-matching task. It should be noted, however, that the precondition for interpreting alterations in the occlusion condition without any detectable changes in MEP amplitude was that the peripheral properties of the neuromuscular system were very similar throughout the two conditions. In addition to the proposed cortical compensatory mechanism for a reduction in spinal and corticospinal excitability with vascular occlusion, presumably caused by increased III/IV afferent inhibition, we cannot rule out that force overestimation may also be triggered by events at the spinal level. Vascular occlusion reportedly attenuates H-response amplitudes of the FCR in response to median nerve stimulation, thus suggesting that the decreased afferent (e.g., somatosensory) input to motoneurons from the occluded limb is induced by the promotion of presynaptic inhibition .
This may be partly because MEP amplitude in the resting state was significantly decreased in the occluded condition. It is therefore possible that the overestimation of exerted force during occlusion was caused by reduced afferent feedback for the same handgrip force level. These neurophysiological alterations at the spinal level may also be involved in increasing the silent period duration. This speculation is supported by the reported attenuation of P23 amplitudes generated in the contralateral somatosensory cortex (S1). However, we believe that participants might place an exclusive emphasis upon the sense of effort in situations in which there is a large mismatch between the sense of muscular force (mediated by large-diameter cutaneous, joint, or muscle afferents) and the sense of effort, as previously shown. The present study provides evidence that rapid force overestimation during unfatigued contraction with vascular occlusion might be triggered by the instantaneous inhibition of both spinal motoneuron and corticospinal tract excitability, which may be compensated for by activity in motor-related cortical areas, with increased voluntary drive. That is, vascular occlusion was able to briefly enhance the state of the motor system to maintain the same level of output force. Given its low mechanical stress and instantaneous enhancing effect on motor system activity, a combination of low-intensity muscle contractions and moderate vascular occlusion is potentially useful for accelerating the recovery of muscular strength in older people (including bedridden older adults) and for improving muscular function during postoperative rehabilitation.
This enhancing effect on motor system activity during ischemic contraction will also be useful for athletes and coaches, because voluntary force exertion in the human motor system is relatively inhibited and there is a latent ability to produce additional force that cannot be produced during ordinary force exertion. A combination of vascular occlusion and MVC will induce this enhancing effect on motor system activity, thus resulting in additional force production. This is because maximal voluntary force is unchanged with and without vascular occlusion, despite vascular occlusion-induced motor system inhibition; in the present study, the motor system inhibition was complemented under restricted blood flow. If there were no excitatory neuronal input to the M1, corresponding to the occlusion-induced inhibition of the M1 and/or corticospinal tract and spinal motoneurons, then muscular force production levels would decrease when combined with vascular occlusion. In practice, repetitive muscular contractions combined with vascular occlusion in long-term exercise training can induce increased muscle mass and muscular strength, even when the levels of muscular force are much lower than those expected to induce muscular hypertrophy. The neural mechanisms underlying the effects of an externally applied occlusive stimulus have been interpreted as the additional recruitment of fast-twitch fibers in an ischemic condition caused by muscle fatigue. That is, fast-twitch muscle fibers would be preferentially and/or additionally activated, even if the level of force were much lower than that expected to recruit them, because of muscle fatigue (with increased metabolic products caused by muscular contraction). However, the preferential and/or additional recruitment of fast-twitch fibers could also be induced by increased compensatory neuronal inputs to spinal motoneurons immediately after vascular occlusion.
Several possible limitations of the present study should be considered. First, we cannot deny the possibility that ischemia of the forearm and/or the tourniquet itself caused mechanical deformation of the nerves and interfered with conduction in the peripheral nerve. Indeed, several previous studies have demonstrated that tourniquet-induced vascular occlusion of the upper arm markedly attenuates the amplitudes of early-latency somatosensory evoked potentials and Erb’s potentials in response to median nerve stimuli at the wrist. However, we previously observed no significant changes in the peak latencies or amplitudes of nerve action potentials or early-latency somatosensory evoked potentials (i.e., N20) with median nerve stimuli during arterial occlusion (250 mmHg). This inconsistency in the effects of vascular occlusion on median nerve function may be mainly caused by differences in occlusion duration rather than in the degree of occlusion. In previous studies, the vascular occlusion duration was relatively long (from 24 min to 30 min), whereas this duration did not exceed 150 s in our previous experiments or 60 s in the current study. It may therefore be that vascular occlusion of short duration (≤ 150 s) does not induce any deterioration in median nerve function, at up to a tourniquet-induced inflation pressure of 250 mmHg. This conclusion is supported by the results of our M-response experiments, in which no significant differences between the control and occluded conditions were observed in the amplitudes or time integrals of M-waves, indicating that such arterial occlusion of the upper arm does not produce any substantial changes in the M-response. Second, we must consider that the relatively small number of TMS trials (10) might have produced large variability in MEP amplitudes, although the number of trials in the present study was decided based on measurements of MEP size in a previous study.
However, we believe that even if this effect occurred, it was unlikely to have canceled out the observed attenuation of MEP amplitudes caused by vascular occlusion in the resting state, because we were able to observe significant MEP amplitude attenuation in the occluded condition despite the relatively high variability in MEP amplitudes. Moreover, to prevent such confounding effects as much as possible, we investigated the effects of vascular occlusion on the motor system in a resting state before the unilateral force-matching task in the TMS experiment. Under these conditions, there were no differences in the numbers of TMS trials between participants, suggesting that the number of TMS trials during handgrip contractions had no effect on MEPs in the resting state. Third, methodological limitations should be noted because the present study used three different experiments: bilateral force perception, unilateral TMS, and H-response experiments. We used the contralateral force-matching method in the bilateral force perception experiment to allow quantification of the ongoing perception of exerted muscular force. However, the application of TMS over the contralateral (left) M1 during unilateral (right) muscular contraction in the force-matching task may have affected the formation of force perception to some extent, because of transient decreases in exerted force immediately after TMS. Thus, unlike in the bilateral force-matching task, the application of TMS over the left M1 during right muscular contraction had to be performed during handgrip force exertion by the right hand alone in the unilateral TMS experiment. Regarding H-response measurements, we used the H-reflex during voluntary muscle contraction for facilitation, because of the relative difficulty and large variability of evoking the H-reflex in the FCR without facilitation.
The contraction-induced H-reflex method used in the present study, in which participants hold a light weight (e.g., 0.50 kg) to facilitate the H-reflexes of individual muscles (e.g., the FCR), has good reliability in terms of the amplitude and latency of the response. We therefore believe that the contraction-induced H-response results reflect activity in the spinal cord during ongoing voluntary muscle contraction with or without vascular occlusion. Nonetheless, we must note that, because of the methodological differences between the three experiments, we cannot assume that the activities of the descending motor pathways were the same across them. We therefore recommend careful interpretation of the present results: rapid force overestimation during vascular occlusion might be triggered by the instantaneous inhibition of both spinal motoneuron and corticospinal tract excitability. To address these limitations, a future study should investigate the effects of vascular occlusion on excitatory input from the motor cortex to the corticospinal tract and/or on frontal and parietal cortex activity using functional magnetic resonance imaging.
Differences in the upslope of the precordial body surface ECG T wave reflect right to left dispersion of repolarization in the intact human heart | eca4b0f4-fd96-44d1-b124-bf2816b9764b | 6546969 | Physiology[mh] | The relationship between intracardiac repolarization of the intact human heart and the surface electrogram T wave (SECG TW ) is poorly understood. Several markers of repolarization, including QT interval, JT interval, and Tpeak-Tend (TpTe), have been associated with an increased risk of cardiac events, but their relationship to local intracardiac repolarization is poorly understood. Although T waves recorded directly on the intracardiac surface can accurately determine local repolarization time (RT), , the SECG TW is thought to represent a far-field recording displaying a summary of repolarization of the entire heart. Hence, there is uncertainty as to what SECG TW markers represent within the heart. , , , The normal SECG TW is upright in almost all leads and is concordant to the QRS complex. Yet at the cellular level, depolarization and repolarization reflect current flow in opposite directions. It is hypothesized that in order for the SECG TW to be concordant, waves of depolarization and repolarization must travel in opposite directions. Several studies have demonstrated opposing depolarizing and repolarizing apicobasal wavefronts. , , Other studies have demonstrated a transmural repolarization gradient , and have suggested that TpTe represents transmural dispersion of repolarization. However, the repolarization sequence of the intact human ventricle is related to the sequence of activation, , and this may impart changes on the SECG TW . 
This study aimed to examine the association between intracardiac ventricular repolarization in the intact human heart and the SECG TW , at varying cycle lengths and activation wavefronts, in order to better understand the genesis of the SECG TW and examine the extent to which it represents local intracardiac repolarization.
Patient demographics

Ten patients (mean age 35 ± 15 years; 6 men) with structurally normal hearts undergoing diagnostic electrophysiological study were enrolled. The study was approved by the local ethics committee and conformed to the Declaration of Helsinki. All patients gave informed consent.

Intracardiac recording and surface T-wave assessment

Our methodology has been described previously and in the . In brief, decapolar catheters were placed in the right ventricle (RV) and lateral wall of the left ventricle (LV) for recording in an apicobasal orientation, and in the epicardium of the LV (LV epi ) via the lateral cardiac vein of the coronary sinus (CS) for recording transmurally across the LV wall ( and ). This configuration allowed us to assess ventricular repolarization across the apicobasal, LV–RV, and endo–epi axes. Restitution curves were performed by pacing in 3 separate regions within the heart: RV apex, LV endo at the base, and LV epi at the base (for further details see the ).

Data analysis

At each S2 interval for every pacing location (RV, LV, and CS), SECG TW markers ( ) were assessed and compared with simultaneously recorded unipolar intracardiac repolarization times (UEGMRT) in the LV and RV ( ). All 12 leads of the SECG (unipolar and bipolar) were analyzed to enable understanding of the relationship between intracardiac repolarization and the clinical SECG TW . In the unipolar contact electrograms, activation time (AT) and repolarization time (RT) were measured at the minimum of the first derivative, min(dV/dt), of the signal within the depolarization phase and at the maximum of the first derivative, max(dV/dt), of the signal during the T wave, respectively ( ). The activation recovery interval, a standard surrogate of local action potential duration, was measured as RT − AT ( ). Dispersion of repolarization was computed as the interval between minimum and maximum RT.
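The marker definitions above (AT at min dV/dt within the QRS window, RT at max dV/dt within the T-wave window, ARI = RT − AT, and dispersion as latest minus earliest RT) can be sketched in a few lines of NumPy. This is an illustrative implementation only, not the study's analysis code; the window boundaries and sampling rate are assumptions.

```python
import numpy as np

def egm_markers(t, v, qrs_win, tw_win):
    """Wyatt-style markers on a unipolar contact electrogram.

    t, v    : sample times (ms) and voltages
    qrs_win : (start, end) of the depolarization phase, ms (assumed known)
    tw_win  : (start, end) of the T wave, ms (assumed known)

    AT = time of min(dV/dt) inside qrs_win; RT = time of max(dV/dt)
    inside tw_win; ARI = RT - AT, a surrogate of local APD.
    """
    dvdt = np.gradient(v, t)
    qrs = (t >= qrs_win[0]) & (t <= qrs_win[1])
    tw = (t >= tw_win[0]) & (t <= tw_win[1])
    at = float(t[qrs][np.argmin(dvdt[qrs])])
    rt = float(t[tw][np.argmax(dvdt[tw])])
    return at, rt, rt - at

def dispersion(rts):
    """Dispersion of repolarization: latest minus earliest RT."""
    return max(rts) - min(rts)
```

On a synthetic electrogram with a sharp downstroke at 100 ms and a T-wave upstroke at 300 ms, this returns AT = 100 ms, RT = 300 ms, and ARI = 200 ms.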
For every beat, differences in ECG markers within and between individual leads were assessed for comparison with repolarization dispersion in the major anatomic axes: apicobasal repolarization differences (measured as the largest difference between apical and basal RTs within the heart), transmural dispersion of repolarization of the LV basal wall (measured as the largest difference between endocardial and epicardial RTs), and right to left repolarization dispersion (measured as the largest difference between right and left RTs). SECG TW was analyzed for time of onset of the T wave (Ton), peak of the T wave (Tpeak), and end of the T wave (Tend) in every ECG lead, in every patient ( ), at every cycle length. Tpeak was identified as the maximum of upright and the minimum of inverted T waves, whereas Ton was localized as the local inflection point at the onset of the T wave. Tend was calculated using the tangent method, as the intersection between the tangent to the latest flank of the T wave and the baseline. This allowed comparison, during a single beat across all 12 ECG leads, of T-wave duration, Tpeak, earliest Ton to latest Tend, and differences between the upslope ends of different leads with intracardiac repolarization, and assessment of their association with repolarization dispersion in the major anatomic axes. In this study, "T-wave upslope" refers to the ascending flank of the T wave, and "upslope end" refers to the end of the ascending flank of a T wave, which corresponds to Tpeak in upright T waves and to Tend in inverted T waves. The interval between the earliest and latest upslope end was used as an estimate of repolarization dispersion. In total, 23,946 individual SECG TW were analyzed and compared to regional UEGMRT to assess the association of SECG TW with UEGMRT regardless of pacing cycle length and activation wavefront.
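The tangent construction for Tend, and the upright/inverted rule for the upslope end, can be sketched as follows. This is an illustrative rendering of the standard tangent method under assumed sampling, not the study's actual software.

```python
import numpy as np

def tend_tangent(t, v, peak_idx, baseline=0.0):
    """Tend by the tangent method: the tangent at the steepest point of
    the terminal T-wave flank is intersected with the baseline."""
    dvdt = np.gradient(v, t)
    # steepest point on the flank after Tpeak
    i = peak_idx + int(np.argmax(np.abs(dvdt[peak_idx:])))
    # line through (t[i], v[i]) with slope dvdt[i]; solve for v = baseline
    return t[i] + (baseline - v[i]) / dvdt[i]

def upslope_end(t_peak, t_end, upright):
    """End of the ascending flank: Tpeak for upright T waves, Tend for
    inverted T waves, as defined in the text."""
    return t_peak if upright else t_end
```

For a simple triangular T wave peaking at 5 mV at t = 5 ms and falling linearly to baseline at t = 10 ms, the tangent intersection recovers Tend = 10 ms.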
Between 60 and 70 restitution S2 points were collected per drive train, and this was performed at 3 different pacing regions of the heart in the 10 patients, with 12 surface ECG leads connected to each patient, for a total of 24,482 individual S2 T waves, of which 536 were discarded because of ECG noise. All markers were measured with a semi-automatic bespoke MATLAB interface, as in previous studies, and manually corrected if needed.

Statistical analysis

Comparisons between measured intracardiac RT and SECG markers were assessed using a paired t test. Measurement similarity between SECG T-wave markers and the intracardiac T wave was assessed by calculating the intraclass correlation coefficient (ICC), using a 2-way mixed model of absolute agreement. The relationship between the upslope of the T wave on the SECG, regardless of polarity, and regional intracardiac RT was assessed using sensitivity and specificity analysis. The relationship between dispersion of repolarization and measures within the ECG T wave was assessed using the ICC and the R 2 of linear regression. P ≤ .05 was considered statistically significant. Statistical analysis was performed using R statistical computing software (version 3.2.2).
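The agreement statistic used here, a two-way model of absolute agreement on single measurements, corresponds to ICC(2,1) in the Shrout–Fleiss scheme. A minimal NumPy sketch follows (illustrative only; the study used R):

```python
import numpy as np

def icc2_1(x):
    """ICC(2,1): two-way model, absolute agreement, single measurement.

    x : array of shape (n_targets, k_raters), e.g. an intracardiac RT
        measure vs an SECG marker evaluated on the same beats.
    """
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()  # between targets
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()  # between raters
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Because absolute agreement penalizes systematic offsets, a surface marker that tracks intracardiac RT beat-to-beat but with a constant bias scores below 1, unlike a plain correlation coefficient.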
Polarity of surface ECG T wave in relation to intracardiac electrogram

The amplitude and polarity of the precordial SECG TW depend on the repolarization sequence within the myocardium ( ). During RV pacing ( ), there is a trend toward a positive SECG TW in leads V1–V4 and a negative SECG TW in V5–V6. This matches the distribution within the myocardium ( ), where early repolarizing sites (RV base and apex) have a positive EGM TW , whereas late repolarizing sites (LV basal epicardium, basal endocardium, and apex) have low-amplitude or negative EGM TW . During LV pacing at the basal endocardium ( ) and basal epicardium ( ), the opposite pattern in precordial lead SECG TW polarity is observed, with a negative amplitude in the RV leads (V1–V2) and a positive amplitude in the LV leads (V5–V6). This again corresponds to the pattern of regional intracardiac repolarization ( and ). The limb leads on the SECG TW showed no consistent pattern in SECG TW polarity, despite the change in repolarization dispersion, in the apicobasal, transmural LV, and RV to LV orientations ( ). and show the intraclass correlation (ICC) between regional EGM TW and SECG TW amplitude, including all pacing sites, through the whole of the restitution protocol in all patients. Strong agreement is demonstrated between V1 and V2 and the amplitude of EGM TW at the RV base (ICC 0.78 and 0.61, respectively; P <.001). Moderate agreement was demonstrated between V6 and the LV base endocardially (ICC 0.3; P <.001) and epicardially (ICC 0.28; P <.001).

Relationship of intracardiac repolarization to markers on the SECG

Independently of the pacing site, the earliest Ton in the SECG TW always preceded intracardiac RT (difference between Ton and earliest intracardiac RT = –85 ± 45 ms; P <.001), whereas the latest Tend in the SECG TW always followed RT (difference between latest Tend and latest intracardiac RT = 43 ± 25 ms; P <.001).
The proportion of sites that repolarized before Tpeak on the SECG showed significant heterogeneity between the LV and RV, based on the location of the pacing site. During RV pacing, a greater proportion of RV sites repolarized before Tpeak, whereas LV sites repolarized after Tpeak ( and ). During LV pacing, both endocardially and epicardially, a greater proportion of LV sites repolarized before Tpeak ( and , and ).

Relationship between regional intracardiac repolarization and SECG TW upslope

 shows the relationship between regional intracardiac RT and the morphology of the SECG TW in leads V1 and V6, in a single beat in 1 patient, during pacing from the RV, LV endocardium, and LV epicardium. Consistency between the morphology and polarity of regional EGM TW and the most proximal SECG TW is confirmed. Furthermore, local repolarization within each cardiac region consistently occurred during the upslope of the most proximal SECG TW :

• During RV pacing, early RV repolarization occurred within the upslope of SECG TW in V1, whereas late LV repolarization occurred within the upslope of SECG TW in V6 ( , insets).

• During LV endocardial and epicardial pacing, early LV repolarization occurred within the upslope of SECG TW in V6, whereas late RV repolarization occurred within the upslope of SECG TW in V1 ( and , insets).

Statistical analysis in all patients, cycle lengths, and pacing sites confirmed this observation ( and ). RV endocardial RTs, including all measured regions from apex to base, occurred on the SECG TW upslope in V1, V2, and V3, with sensitivity of 0.89, 0.91, and 0.84, and specificity of 0.67, 0.68, and 0.65, respectively. As the precordial SECG markers moved further away from the RV anatomically (V4–V6), sensitivity and specificity decreased, and the limb leads showed generally poor sensitivity and specificity for repolarization moments in the RV.
LV basal endocardial, epicardial, and mid-endocardial regions displayed the opposite phenomenon, with sensitivity of 0.79 and 0.8, and specificity of 0.66 and 0.67, in leads V6 and I, respectively, but with decreasing sensitivity and specificity from leads V5 to V1 and poor sensitivity and specificity in the remaining limb leads. Finally, LV apical RTs showed poor sensitivity and specificity to the upslope of the SECG, with only aVR showing a sensitivity of 0.76, but with a poor specificity of 0.52.

Relationship of SECG TW to dispersion of repolarization in the major anatomic axes

TpTe has previously been reported as a marker of dispersion of repolarization, transmurally in the wedge preparation , or globally within the whole heart in animal studies. We studied the relationship between TpTe, the time difference between the ends of the SECG TW upslopes across all leads, and the difference between the start and end of the SECG TW in all leads and dispersion of repolarization in the major anatomic axes. A strong correlation was seen between right to left dispersion of repolarization and the difference between the end of the SECG TW upslope in lead V1 vs V6 (ICC 0.81; R = 0.45; P <.001), lead V2 vs V6 (ICC 0.83; R = 0.5; P <.001), lead V3 vs V6 (ICC 0.85; R = 0.55; P <.001), V1 vs aVL (ICC 0.82; R = 0.5; P <.001), V2 vs aVL (ICC 0.81; R = 0.42; P <.001), and V3 vs aVL (ICC 0.83; R = 0.55; P <.001) ( ), regardless of T-wave polarity, pacing site, or cycle length. No strong correlations existed between the difference in the end of the SECG TW upslopes in any other lead pair and right to left dispersion (best ICC <0.5 for all other variables). No strong correlations existed between the difference between the ends of the SECG TW upslopes and apicobasal (best ICC <0.5 for all measures) or transmural dispersion of repolarization (best ICC <0.5 for all measures).
We were not able to demonstrate any strong correlation between TpTe measured in any of the 12 ECG leads, particularly V4, V5, V6, or lead II, and dispersion of repolarization in the transmural (best ICC <0.2 for all measures), apicobasal (best ICC <0.12 for all measures), or right to left orientation (best ICC <0.22 for all measures) ( ). Additionally, differences between the start and the end of the T wave, and between the start and the end of the T-wave upslope, did not demonstrate any strong correlation with dispersion of repolarization ( ) in the right to left axis (best ICC <0.47 for all measures), the apicobasal axis (best ICC <0.48 for all measures), or transmural dispersion of repolarization (best ICC <0.23 for all measures).
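Once the per-lead markers are in hand, the headline comparison, whether a regional RT falls on a lead's upslope and how far apart the V1 and V6 upslope ends lie, reduces to simple arithmetic. A hedged sketch follows; the contingency-table definitions of sensitivity and specificity here are my assumption, since the paper does not spell them out.

```python
def upslope_window(t_on, t_peak, t_end, upright):
    """Ascending flank of the T wave: Ton->Tpeak if upright,
    Tpeak->Tend if inverted."""
    return (t_on, t_peak) if upright else (t_peak, t_end)

def sensitivity_specificity(same_side_rts, opposite_side_rts, window):
    """Assumed definitions: sensitivity = fraction of same-ventricle RTs
    inside the lead's upslope window; specificity = fraction of
    opposite-ventricle RTs outside it."""
    lo, hi = window
    inside = sum(lo <= rt <= hi for rt in same_side_rts)
    outside = sum(not (lo <= rt <= hi) for rt in opposite_side_rts)
    return inside / len(same_side_rts), outside / len(opposite_side_rts)

def rl_dispersion_estimate(upslope_end_v1, upslope_end_v6):
    """Surface estimate of right-to-left dispersion of repolarization:
    gap between the V1 and V6 upslope ends, in ms."""
    return abs(upslope_end_v1 - upslope_end_v6)
```

For example, with an upright V1 T wave spanning Ton = 180 ms to Tpeak = 240 ms, an RV RT of 230 ms lies on the V1 upslope, while a V6 upslope end at 285 ms would give an estimated right-to-left dispersion of 45 ms.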
The amplitude and polarity of the precordial SECG TW depend on the repolarization sequence within the myocardium ( ). During RV pacing ( ), there is trend toward a positive SECG TW in leads V 1 –V 4 and a negative SECG TW in V 5 –V 6 . This matches the distribution within the myocardium ( ), where early repolarizing sites (RV base and apex) have a positive EGM TW , whereas late repolarizing sites (LV basal epicardium, basal endocardium, and apex) have low-amplitude or negative EGM TW . During LV pacing at the basal endocardium ( ) and basal epicardium ( ), the opposite pattern in precordial lead SECG TW polarity is observed, with a negative amplitude in the RV leads (V 1 –V 2 ) and a positive amplitude in the LV leads (V 5 –V 6 ). This again corresponds to the pattern of regional intracardiac repolarization ( and ). The limb leads on the SECG TW showed no consistent pattern in SECG TW polarity, despite the change in repolarization dispersion, in the apicobasal, transmural LV, and RV to LV orientation ( ). and show the intraclass correlation (ICC) between regional EGM TW and SECG TW amplitude, including all pacing sites, through the whole of the restitution protocol in all patients. Strong agreement is demonstrated between V 1 and V 2 and the amplitude of EGM TW at the RV base ( ICC 0.78 and 0.61, respectively; P <.001). Moderate agreement was demonstrated between V 6 and the LV base endocardially (ICC 0.3; P <.001) and epicardially (ICC 0.28; P <.001).
Independently of the pacing site, the earliest Ton in the SECG TW always preceded intracardiac RT (difference between Ton and earliest intracardiac RT = –85 ± 45 ms; P <.001), whereas the latest Tend in the SECG TW always followed RT (difference between latest Tend and latest intracardiac RT = 43 ± 25 ms; P <.001). The proportion of sites that repolarized before T peak on the SECG showed significant heterogeneity between the LV and RV, based on the location of the pacing site. During RV pacing, a greater proportion of RV sites repolarized before T peak, whereas LV sites repolarized after T peak ( and ). During LV pacing both endocardially and epicardially, a greater proportion of LV sites repolarized before Tpeak ( and , and ).
TW upslope shows the relationship between regional intracardiac RT to the morphology of the SECG TW in leads V 1 and V 6 , in a single beat in 1 patient during pacing from the RV, LV endocardium, and LV epicardium. Consistency between the morphology and polarity of regional EGM TW and the most proximal SECG TW is confirmed. Furthermore, local repolarization within each cardiac region consistently occurred during the upslope of the most proximal SECG TW : • During RV pacing, early RV repolarization occurred within the upslope of SECG TW in V 1 , whereas late LV repolarization occurred within the upslope of SECG TW in V 6 ( , insets). • During LV endocardial and epicardial pacing, early LV repolarization occurred within the upslope of SECG TW in V 6 , whereas late RV repolarization occurred within the upslope of SECG TW in V 1 ( and , insets). Statistical analysis in all patients, cycle lengths, and pacing sites confirmed this observation ( and ). RV endocardial RTs, including all measured regions from apex to base, occurred on the SECG TW upslope in V 1 , V 2 , and V 3 , with sensitivity of 0.89, 0.91, and 0.84, and specificity of 0.67, 0.68, and 0.65, respectively. As the precordial SECG markers moved further away from the RV anatomically (V 4 –V 6 ), sensitivity and specificity decreased, and the limb leads showed generally poor sensitivity and specificity for repolarization moments in the RV. LV basal endocardial, epicardial, and mid-endocardial regions displayed the opposite phenomenon, with sensitivity of 0.79 and 0.8, and specificity of 0.66 and 0.67 in leads V 6 and I, respectively, but with decreasing sensitivity and specificity from leads V 5 to V 1 , and poor sensitivity and specificity in the rest of the limb leads. Finally, LV apical RTs showed poor sensitivity and specificity to the upslope of the SECG, with only aVR showing sensitivity of 0.76 but with poor specificity of 0.52.
TW to dispersion of repolarization in the major anatomic axes TpTe has previously been reported as a marker of dispersion of repolarization, transmurally in the wedge preparation , or globally within the whole heart in animal studies. We studied the relationship between TpTe, the time difference between the end of the SECG TW upslope across all leads, and the difference between the start and end of the SECG TW in all leads to dispersion of repolarization in the major anatomic axes. A strong correlation was seen between right to left dispersion of repolarization and the difference between the end of the SECG TW upslope in lead V 1 vs V 6 (ICC 0.81; R = 0.45; P <.001), lead V 2 vs V 6 (ICC 0.83; R = 0.5; P <.001), lead V 3 vs V 6 (ICC 0.85; R = 0.55; P <.001), V 1 vs aVL (ICC 0.82; R = 0.5; P <.001), V 2 vs aVL (ICC 0.81; R = 0.42; P <.001), and V 3 vs aVL (ICC 0.83; R = 0.55; P <.001) ( ), regardless of T-wave polarity, pacing site, or cycle length. No strong correlations existed between the difference in the end of the SECG TW upslopes in any other lead and right to left dispersion (best ICC <0.5 for all other variables). No strong correlations existed between the difference between the end SECG TW upslopes and apicobasal (best ICC <0.5 for all measures) or transmural dispersion of repolarization (best ICC <0.5 for all measures). We were not able to demonstrate any strong correlation between TpTe measured in all 12 ECG leads, particularly in V 4 , V 5 , V 6 , or lead II on the SECG and dispersion of repolarization in the transmural (best ICC <0.2 for all measures), apicobasal (best ICC <0.12 for all measures), or right to left orientations (best ICC <0.22 for all measures) ( ). 
Additionally, differences between the start and the end of the T wave and the start and the end of the T-wave upslope did not demonstrate any strong correlation with dispersion of repolarization ( ) in the right to left axis (best ICC <0.47 for all measures), apicobasal axis (best ICC <0.48 for all measures), and transmural dispersion of repolarization (best ICC <0.23 for all measures).
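The correlation analyses above compare a candidate surface-ECG marker against directly measured dispersion. The study reports ICC and R; as a hedged sketch, only a plain Pearson R on invented paired values is shown here:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented paired per-beat measurements:
# marker     = end of TW upslope in V1 minus V6 (ms),
# dispersion = directly measured right-to-left dispersion of repolarization (ms).
marker = [12, 18, 25, 31, 40, 44]
dispersion = [15, 20, 24, 33, 38, 47]
r = pearson_r(marker, dispersion)
```

A high R on such pairs, held across polarity, pacing site, and cycle length, is what qualifies a marker as a strong correlate in the sense used above; an ICC additionally penalizes systematic offsets between the two measures.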
This is the first study to provide direct correlation between local repolarization in the major anatomic axes and the SECG TW in the intact human heart. The main findings are as follows: (1) the amplitude/polarity of the T wave on the precordial leads reflects the polarity of the unipolar signal recorded on the underlying nearby myocardium; (2) local RTs in the RV occur along the upslope of the SECG TW in leads V 1 , V 2 , and V 3 , whereas local repolarization in the LV occurs during the SECG TW upslope in leads V 5 , V 6 , and I; (3) the difference between the end of the T-wave upslope time in V 1 minus V 6 provides a good representation of right to left dispersion of repolarization; (4) no strong markers for apicobasal or transmural repolarization differences were seen on the SECG; and (5) TpTe did not correlate with dispersion of repolarization in the right to left, apicobasal, or transmural axis. Polarity of the precordial lead SECG TW mirrors the EGM TW of the underlying myocardium The polarity and upslope of the contact EGM TW are related to the local repolarization component of the underlying myocardium. The EGM TW is more positive when the repolarization of local tissue is early, biphasic in intermediate RTs, and negative in late repolarizing sites where the rest of the heart has repolarized. The far-field or whole heart component is represented by the downslope of the EGM TW . Our data using the well-validated Wyatt method confirm this finding ( ), with early repolarizing sites having an upright T wave but late repolarizing sites having a negative T wave. The precordial lead SECG TW mirrored the polarity of the T wave in the underlying myocardium, with V 5 and V 6 matching the polarity of sites measured in the LV ( ), whereas V 1 , V 2 , and V 3 matched the polarity of sites measured in the RV ( and ).
The T waves in the limb leads displayed no pattern in relation to local repolarization, possibly because of their substantial distance from the myocardium, thus representing a far-field electrogram of whole heart repolarization. The lack of correlation between SECG TW polarity and the apex of the LV may highlight that the precordial SECG fails to extend inferiorly enough to cover the local repolarization of the LV apex and the overlap with the RV apex. Right and left heart intracardiac RTs occur along the upslope of the precordial SECG TW regardless of polarity The upslope of V 1 –V 3 showed good sensitivity to all measured RV repolarization, whereas the upslope of V 5 –V 6 and I showed good sensitivity to transmural LV basal and LV mid-myocardial repolarization ( ). These findings were independent of T-wave polarity and activation wavefront. Yamaki et al previously demonstrated that ventricular AT, measured as the QRS downstroke time on the body surface ECG, closely correlated to directly measured ventricular activation and activation delay in LV hypertrophy. Our finding that regional ECG T-wave upslope correlates with directly measured ventricular RT would be in keeping with these data, as repolarization is the electrically opposite phenomenon to depolarization. It has previously been suggested that variations in the transmural gradient across the ventricular wall may inscribe the morphology of the SECG TW , but our data do not support this. Regardless of the transmural gradient ( and ), repolarization of the base of the LV occurred along the upslope of V 6 , and this did not alter its polarity. This is perhaps due to the differences between experimental studies and our intact whole human heart studies, in which the far-field or global myocardial muscle mass has a greater influence on the SECG TW .
SECG TW and dispersion of repolarization in the major anatomic axis Our data show that differences between the end of the upslope in V 1 , V 2 , and V 3 vs V 6 /aVL provide excellent correlation to right to left dispersion of repolarization ( ), regardless of the polarity of the T wave, cycle length, or activation wavefront. Poor correlation existed between TpTe and dispersion of repolarization in the transmural, apicobasal, and right to left axis ( ). This is in contrast to previous studies and again reflects differences between local and far-field electrogram components in experimental studies compared to whole heart studies. This highlights the limitation of TpTe in a single SECG TW as a measure of dispersion of repolarization. Our data suggest that the upslope in the precordial lead SECG TW represents local regional repolarization of the nearby underlying myocardium. Thus, if the SECG T wave is negative, TpTe may reflect a local repolarization component; however, if the T wave is positive, it may represent a difference between the end of a regional repolarization component and the far-field or late repolarization regions within the heart. This may explain , as TpTe is a constant measure of the balance between local and global repolarization. There was no strong relationship between apicobasal dispersion of repolarization and SECG markers ( ). Previous work has suggested that T-wave morphology may be inscribed by predominant apicobasal differences in repolarization. Meijborg et al suggested that differences in the earliest peak to the latest end of the SECG TW reflect global dispersion of repolarization within the porcine heart, in which apicobasal differences predominate. In the intact human heart, however, differences in repolarization between the thin RV and the large muscle mass of the LV may predominate, reflecting species differences.
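Since TpTe is central to this discussion, a brief sketch of how it can be measured on one sampled T wave may help. This is an illustration only: the tangent method used here for T-end is a common convention, not necessarily the study's measurement protocol, and the signal values are invented:

```python
def tpte_ms(t_wave, fs_hz, baseline=0.0):
    """TpTe (T-peak to T-end, ms) for a positive monophasic T wave.

    t_wave: amplitude samples of the T wave, sampled at fs_hz.
    T-peak: sample of maximum amplitude.
    T-end:  intersection of the steepest-downslope tangent with the baseline
            (the "tangent method").
    """
    peak = max(range(len(t_wave)), key=lambda i: t_wave[i])
    # steepest downslope after the peak (most negative sample-to-sample step)
    down = min(range(peak, len(t_wave) - 1),
               key=lambda i: t_wave[i + 1] - t_wave[i])
    slope = t_wave[down + 1] - t_wave[down]   # amplitude change per sample
    # project the tangent from the steepest point down to the baseline
    t_end = down + (baseline - t_wave[down]) / slope
    return (t_end - peak) * 1000.0 / fs_hz

# Invented triangular T wave sampled at 1 kHz: peak at sample 5, end at sample 10.
example_tpte = tpte_ms([0, 2, 4, 6, 8, 10, 8, 6, 4, 2, 0], 1000)
```

The point made in the text then follows directly: this single-lead interval fixes one balance between a regional component (the upslope/peak) and whichever later-repolarizing regions inscribe the T-end, so it cannot isolate dispersion along any one anatomic axis.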
Study limitations Data were confined to multielectrode unipolar contact catheter recordings in the human heart as opposed to global mapping. This was because it was not possible to use global mapping systems in patients admitted for a minimally invasive human study. In addition, although great care was taken to transmurally oppose the catheters, true transmural recordings like those of wedge preparations or plunge electrode recordings were not possible. We did not assess global measures of TpTe as assessed in some other studies; therefore, comparisons with these studies are not possible. ECG markers of repolarization have been derived from a standard 12-lead ECG configuration. Future studies may assess the interaction between intracardiac repolarization dynamics and ECG repolarization markers derived from orthogonal leads.
The upslope of the T wave in the precordial leads on the surface ECG represents regional repolarization within the underlying RV and LV. Differences between the end of the upslope in V 1 –V 3 vs V 6 /aVL represent right to left dispersion of repolarization. Further assessment of the consistency of this marker in structurally abnormal intact human hearts and its role in risk prediction is needed. There was no correlation between TpTe and dispersion of repolarization in the intact human heart.
Experiences of general practice care for self-harm: a qualitative study of young people’s perspectives | eda4b2f0-f80e-456b-9d33-34bc3407b979 | 8340729 | Family Medicine[mh] | Self-harm in young people is a national public health concern. Defined as self-poisoning or self-injury regardless of intent, self-harm is the strongest risk factor for suicide, increasing suicide risk by 50 times. – In young people, self-harm is thought to be influenced by biopsychosocial factors, and is associated with depression, anxiety, future self-harm episodes, and poorer educational and employment outcomes. – In young people (aged 10–24 years) there is a 26% lifetime prevalence of self-harm, with self-cutting the most prevalent method in the community. – In young people who have died by suicide, over 50% had a past history of self-harm. Episodes of self-harm in young people presenting to general practice have increased, and young people who self-harm (aged 16–24 years) see GPs the most in the NHS. – A fear of negative reactions has been identified as a barrier to accessing services for young people who self-harm, and only a few facilitators have been identified for help-seeking. A National Institute for Health and Care Excellence (NICE) self-harm guideline research recommendation is that rigorous qualitative research should explore user experiences of services. GPs report a positive attitude to providing frontline support for young people who self-harm; however, there is little published literature on young people’s experiences of, and access to, care in general practice for self-harm. The aim of this study was to explore the help-seeking behaviours, experiences of GP care, and access to general practice of young people who self-harm.
This study adopted a qualitative methodology using semi-structured interviews that enabled in-depth exploration of young people’s experiences and perspectives. This study was informed by constructionist epistemology and a critical realist theoretical stance, and acknowledged that individuals have their own subjective insights dependent on their life experiences. This study is reported in accordance with the Standards for Reporting Qualitative Research. A patient and public involvement advisory group informed this study through revising the interview topic guide, designing recruitment strategies, and interpreting findings. Setting and participants This study was based in England. Young people aged 16–25 years, regardless of type of self-harm, were eligible to participate. Recruitment Participants were recruited from the community, Twitter, and self-harm third-sector organisations. The recruitment poster was displayed around some universities in the North of England and the Midlands, local council libraries, and sixth-form colleges. A Twitter recruitment message was written with the patient advisory group and posted on the lead author’s personal account. Eight national self-harm third-sector organisations were contacted by email to ask if they would share the recruitment poster within their organisations. Recruiting purposively, to aim for maximum variation, was attempted but this proved challenging, and thus a national convenience sample was obtained. Interested eligible participants were emailed an invitation letter, study information sheet, and consent form. Data collection One author (a GP researcher with expertise in self-harm in primary care) conducted all interviews from April to November 2019. Interviews were digitally recorded, transcribed verbatim by the same author or a professional transcription company, and anonymised. Interviews were semi-structured to adequately explore and be flexible to the narratives of young people during interviews.
Semi-structured interviews have previously been used with young people who self-harm. Interviews were carried out face-to-face or by telephone. A topic guide developed from the literature, research team discussion, and patient and public involvement advisory group input explored reasons for young people’s self-harm, experiences of GP care, and access to general practice care for self-harm. It was iteratively refined as data collection and analysis matured in parallel. Consent was confirmed at the start of interviews, and participants were free to withdraw from participating at any time. A study risk protocol was established in case distress was identified in participants during the study process. All participants received a ‘Staying Safe Sheet’ that listed support services for self-harm at the beginning of interviews. Face-to-face interviews were held in private meeting rooms at Keele or Birmingham Universities, and the option of a telephone interview was given. All participants were offered a 10 GBP Amazon voucher on completion of the interview. Data collection stopped when data saturation (no new data were emerging) was felt to be reached. Data analysis Interview data were analysed using reflexive thematic analysis applying principles of constant comparison, compatible with a critical realist stance. Analysis was flexible and recursive, moving between stages, and each transcript was coded by the author who conducted the interviews. All transcripts were independently coded by at least two authors. Codes were compared across transcripts, sorted into wider categories, and recorded in an analysis table to support the generation of candidate themes. Higher-level recurring themes were agreed on by all authors. Findings were presented to the patient and public involvement advisory group. Reflexivity The interviewer made field notes after each interview that supported topic guide iteration, the analysis process, and researcher reflexivity.
At study meetings, researchers considered how their backgrounds influenced interpretation of the data, and their understanding of findings. The research team members have different professional backgrounds: social science, anthropology, general practice, health services research, and evidence synthesis. This, and the input of the patient and public involvement advisory group into the interpretation of findings, increase the breadth and depth of analysis, and thus the trustworthiness of findings.
In total, 13 interviews with young people who self-harmed were conducted. Interviews lasted between 25 and 49 min. Nine interviews were face-to-face, and four were by telephone. Participant demographic characteristics are detailed in . The age of participants ranged from 19 to 25 years, and participants were from the Midlands and South East England. At the time of interview, 12 participants were in higher education ( n = 5 undergraduate, n = 7 postgraduate), and one in further education. Narratives of young people were attained from before, and within, education settings. The risk protocol was not activated during the study. The three themes generated are: help-seeking avenues, barriers to seeking help from general practice, and facilitators to accessing general practice care. Themes are supported by illustrative quotes. Unique identifiers for young people include their pseudonym and age. Help-seeking avenues Young people who self-harm described different avenues of help-seeking: role of significant others, non-statutory services, and NHS services.
Role of significant others in supporting help-seeking Participants described how parents and friends enabled them to either seek help or had sought help on their behalf: ‘In middle school … it was brought to my parents’ attention that I had been cutting myself, and then they took me to see a therapist.’ (Hannah, 19 years) ‘But my friends kind of caught on, I didn’t tell them, but they know me well enough to realise what was going on so when they called the services, people were trying to arrange a mental health act assessment.’ (Bethany, 24 years) Some participants reported how their partners had influenced their help-seeking behaviour and supported their candidacy (one’s eligibility for medical intervention jointly negotiated between individuals and health services) for care, such as: ‘I really felt pressured [from her partner] … to go and seek help, but I myself did not want to do it, but then eventually I did go, and I hmmm obviously didn’t really like doing it first, because I don’t even for physical problems, I don’t go to the GP, I am like, just leave it … do things on my own … and it was my first time I probably said it out loud that I was self-harming.’ (Lucy, 24 years) Other participants described how other people had hindered their ability to seek help for self-harm: ‘And as I got older, I became responsible for caring for my sister. She was very, very, very, very ill for a really long time and I think when I first sought help for it all, was when she had a dissociative identity disorder and, er, when I first sought help for my own self-harm was when I’d taken her in for hers.’ (Divya, 23 years) Participants reflected on how parents, partners, and friends could act as enablers for help-seeking and support participant candidacy, whereas others described how other people could hinder their efforts to seek help.
Non-statutory services Participants described seeking help for self-harm from higher education services, third-sector organisations, the internet, and private services: ‘Last year when that did happen, I made a counselling [university service] appointment, I think I went to one or two and then I just picked myself back up.’ (Hannah, 19 years) ‘Getting to a point where I was contacting Samaritans like every single day, erm, and getting fed up and reaching the point where like I wouldn’t seek help and you know; I wouldn’t be alive kind of thing.’ (Ian, 21 years) Participants also described seeking help and support from online resources: ‘I looked at a few online resources, so pamphlets … and I think that was quite helpful to relax me about the situation I was in and what support systems are available out there, so I don’t feel alone; I think it just helped me be nudged in the right direction to at least try.’ (Jemima, 25 years) Ian detailed that he had sought private psychological therapy following dissatisfaction with statutory services: ‘I had a further six assessments with them [community mental health team] that came to nothing and I reached a point where I was just like you know … I need to do something, like this is becoming a critical stage and so I was like … I need private therapy, I need something to keep me alive.’ (Ian, 21 years) Young people described barriers to help-seeking from non-statutory services. 
Marie shared her experience of seeing up to eight counsellors, but she explained she needed to be ready to seek help for self-harm, highlighting a challenge young people may face when struggling with self-harm: ‘I don’t think I was quite there in my mind that I was as worse as I was or bad as I was … I know last year for example I was in not a very good place at all and I now know that even though I don’t like it, necessarily talking to counsellors … it will help me in the long run.’ (Marie, 19 years) NHS services Some participants described experiences of seeking help from primary care services: ‘And the first person I spoke to, was the pharmacist … he was totally calm about it … but it was the changing point in my life that I actually realised that it’s not something to be ashamed of.’ (Kate, 22 years) Participants stated how they sought help for self-harm through the NHS community-based Improving Access to Psychological Therapies service. Many participants shared negative experiences: ‘I think the first one was erm, there was a CBT [cognitive behavioural therapy] experiment afterwards, these are in like a year break of each other but erm, the CBT person, they wanted to do it over the phone which I found more difficult to begin with and then they were half an hour late for the appointment on the phone so I found that like “okay, you’re not going to turn up to a phone appointment on time then I don’t think that this would work”.’ (Gemma, 25 years) As Gemma highlights, psychological therapies over the telephone do not suit some young people.
Participants also described varied experiences of seeking help through NHS mental health services: ‘The first time I spoke to someone about it, it was honestly the most useless … first of all they just told me I was being attention seeking [Child and Adolescent Mental Health Services counsellor] so I just kind of, yeah … it took me a while to look for help again … she wasn’t really listening to what I was saying and as she was just finishing the sentences for me.’ (Emily, 23 years) A separate experience was shared by Emily that was in contradiction to her first experience with mental health services: ‘… they [mental health access team] did actually like em, ‘cause it was the first time that kind of, that I actually thought that I was being listened to and like, they were trying to actually figure out ways to help me rather than just completing my thoughts.’ (Emily, 23 years) Some participants felt frustrated when they were deemed not eligible for NHS psychological therapy services after being referred by clinicians as they struggled with self-harm: ‘I’ve been referred to psychotherapy ‘cause of my diagnosis [diagnosis not disclosed] and they’ve gone, “we can do eight sessions of CBT but we don’t think it’s going to achieve anything and you’re still hurting yourself and it’s against our policy to do that”.’ (Divya, 23 years) Barriers to seeking help from general practice Young people reflected on their experiences consulting GPs for self-harm. They described what influenced future help-seeking, and how preconceptions of GP care, lack of knowledge about self-harm and about accessing general practice, and fear of consulting GPs were barriers to accessing general practice care.
Expectations not met Some participants vividly described experiences of feeling that a GP did not fully explore their problem and seemed to rely on prescribing as a management option: ‘But he sort of went “okay we’ll just put you on anti-depressants and see you every two weeks and let’s see what does, see if your mood increases, if anything happens, if you stop self-harming, if things decrease” … I ended up maxing out on the amount you can get with anti-depressants within like six months, and they weren’t sitting well with me.’ (Gemma, 25 years) Some accounts revealed tensions in perspectives of experience of previous consultations: ‘It was … hmphh [small sigh] … it was pretty positive, I mean he was, he was understanding, very non-judgemental, warm, I felt comfortable telling him everything … one thing that did not feel quite right was the way he responded … like I told him I don’t know … “I have a sore throat”.’ (Lucy, 24 years) Lucy suggested that although she found the GP to be considerate and this supported her self-harm disclosure, she thought he responded casually to her disclosure of self-harm. Preconceptions and fears Young people held preconceived views of GPs’ care for self-harm, which acted as a barrier and hindered them seeking support for self-harm through general practice: ‘From what I’ve noticed from others and my own experiences is that they don’t really get good experiences straight away, and then like I said, it took me such a long time before I actually tried again.’ (Emily, 23 years) Young people vividly shared fears of being admitted to hospital, loss of confidentiality, and stigma as barriers for accessing support from general practice: ‘I thought they’d hospitalise me immediately. 
I thought they’d panic and push me away as if, “no, you’ve gotta — you know, you’ve got to go into an inpatient unit, and we’ve got to inform your family, and you’ve got to quit your course”.’ (Kate, 22 years) ‘Erm, it was really, really difficult erm, it’s hard enough trying to get an appointment these days, erm, and it’s when they ask you on the phone, “can I ask you like why you need this appointment”, I just lied and just said, you know, I feel ill … I felt a lot of shame in it.’ (Ian, 21 years) As highlighted by Ian, a fear of stigma around self-harm was found to be a barrier to accessing care. Experiences of consulting GPs influence help-seeking Young people reflected on the importance of GPs’ responses to self-harm when discussing self-harm in the consultation and provided insights into the impact these had on help-seeking, highlighting the potential consequences of a negatively perceived GP consultation. Some participants described that their experience of seeing GPs for self-harm recursively affected their decision to seek help for self-harm in the future: ‘Err I dunno it was easy to get a doctor’s appointment but the help I got it wasn’t any help as it then put me back another three years until I got help again … like they printed out a form about manic depression and generalised anxiety disorder and then that was it and that’s all I got … nothing and then I never bothered until another three years later.’ (Catherine, 21 years) Other participants, however, reflected on how positive experiences of consulting a GP had resulted in them seeking help from GPs in the future: ‘I had a really great experience, being able to share that with someone new, also my comfort zone and he [GP] just became my support so if I ever had to have an appointment, I just went to him for continuity.’ (Jemima, 25 years) Some participants shared that if they felt self-harm was dismissed by GPs after it was mentioned in the consultation, their future help-seeking behaviour changed as a result: ‘I left the conversation feeling perhaps I was assigning more importance to this that it requires … because I said “if the GP is not too concerned, I shouldn’t be” … I felt I needed to tell him … that I’m actually overdosing on them [prescribed antidepressants] … I did tell him, and once again I didn’t get any reaction … so I decided to stop my medication without telling him and I never attended another appointment with him … I’ve never been to see the GP since and it’s been six months.’ (Lucy, 24 years) Lack of knowledge on self-harm and accessing support Participants also described how a lack of knowledge about self-harm and its risks, and accessing care in general practice was a barrier to seeking support from general practice: ‘There’s sort of a lack of knowledge around the healthcare system works and booking GP appointments is scary … also it’s sort of around knowledge where I didn’t necessarily realise that me self-harming was wrong or was a sign that I will [do so] for a very long time.’ (Divya, 23 years) These barriers are mapped to a candidacy framework and presented in .
Young people who self-harm described different avenues of help-seeking: the role of significant others, non-statutory services, and NHS services.

Role of significant others in supporting help-seeking

Participants described how parents and friends enabled them to seek help, or had sought help on their behalf:

‘In middle school … it was brought to my parents’ attention that I had been cutting myself, and then they took me to see a therapist.’ (Hannah, 19 years)

‘But my friends kind of caught on, I didn’t tell them, but they know me well enough to realise what was going on so when they called the services, people were trying to arrange a mental health act assessment.’ (Bethany, 24 years)

Some participants reported how their partners had influenced their help-seeking behaviour and supported their candidacy (one’s eligibility for medical intervention, jointly negotiated between individuals and health services) for care:

‘I really felt pressured [from her partner] … to go and seek help, but I myself did not want to do it, but then eventually I did go, and I hmmm obviously didn’t really like doing it first, because I don’t even for physical problems, I don’t go to the GP, I am like, just leave it … do things on my own … and it was my first time I probably said it out loud that I was self-harming.’ (Lucy, 24 years)

Other participants described how other people had hindered their ability to seek help for self-harm:

‘And as I got older, I became responsible for caring for my sister. She was very, very, very, very ill for a really long time and I think when I first sought help for it all, was when she had a dissociative identity disorder and, er, when I first sought help for my own self-harm was when I’d taken her in for hers.’ (Divya, 23 years)

Participants reflected on how parents, partners, and friends could act as enablers for help-seeking and support their candidacy for care, whereas other people could hinder their efforts to seek help.
Non-statutory services

Participants described seeking help for self-harm from higher education services, third-sector organisations, the internet, and private services:

‘Last year when that did happen, I made a counselling [university service] appointment, I think I went to one or two and then I just picked myself back up.’ (Hannah, 19 years)

‘Getting to a point where I was contacting Samaritans like every single day, erm, and getting fed up and reaching the point where like I wouldn’t seek help and you know; I wouldn’t be alive kind of thing.’ (Ian, 21 years)

Participants also described seeking help and support from online resources:

‘I looked at a few online resources, so pamphlets … and I think that was quite helpful to relax me about the situation I was in and what support systems are available out there, so I don’t feel alone; I think it just helped me be nudged in the right direction to at least try.’ (Jemima, 25 years)

Ian detailed that he had sought private psychological therapy following dissatisfaction with statutory services:

‘I had a further six assessments with them [community mental health team] that came to nothing and I reached a point where I was just like you know … I need to do something, like this is becoming a critical stage and so I was like … I need private therapy, I need something to keep me alive.’ (Ian, 21 years)

Young people also described barriers to help-seeking from non-statutory services.
Marie shared her experience of seeing up to eight counsellors, but explained that she needed to be ready to seek help for self-harm, highlighting a challenge young people may face when struggling with self-harm:

‘I don’t think I was quite there in my mind that I was as worse as I was or bad as I was … I know last year for example I was in not a very good place at all and I now know that even though I don’t like it, necessarily talking to counsellors … it will help me in the long run.’ (Marie, 19 years)

NHS services

Some participants described experiences of seeking help from primary care services:

‘And the first person I spoke to, was the pharmacist … he was totally calm about it … but it was the changing point in my life that I actually realised that it’s not something to be ashamed of.’ (Kate, 22 years)

Participants described how they sought help for self-harm through the NHS community-based Improving Access to Psychological Therapies service. Many shared negative experiences:

‘I think the first one was erm, there was a CBT [cognitive behavioural therapy] experiment afterwards, these are in like a year break of each other but erm, the CBT person, they wanted to do it over the phone which I found more difficult to begin with and then they were half an hour late for the appointment on the phone so I found that like “okay, you’re not going to turn up to a phone appointment on time then I don’t think that this would work”.’ (Gemma, 25 years)

As Gemma highlights, psychological therapy over the telephone does not suit some young people.
Participants also described varied experiences of seeking help through NHS mental health services:

‘The first time I spoke to someone about it, it was honestly the most useless … first of all they just told me I was being attention seeking [Child and Adolescent Mental Health Services counsellor] so I just kind of, yeah … it took me a while to look for help again … she wasn’t really listening to what I was saying and as she was just finishing the sentences for me.’ (Emily, 23 years)

Emily shared a separate experience that contrasted with her first encounter with mental health services:

‘… they [mental health access team] did actually like em, ‘cause it was the first time that kind of, that I actually thought that I was being listened to and like, they were trying to actually figure out ways to help me rather than just completing my thoughts.’ (Emily, 23 years)

Some participants felt frustrated when, despite struggling with self-harm, they were deemed not eligible for NHS psychological therapy services after being referred by clinicians:

‘I’ve been referred to psychotherapy ‘cause of my diagnosis [diagnosis not disclosed] and they’ve gone, “we can do eight sessions of CBT but we don’t think it’s going to achieve anything and you’re still hurting yourself and it’s against our policy to do that”.’ (Divya, 23 years)
Barriers to accessing general practice care

Young people reflected on their experiences of consulting GPs for self-harm. They described what influenced future help-seeking, and how preconceptions of GP care, limited knowledge of self-harm and of accessing general practice, and fear of consulting GPs were barriers to accessing general practice care.

Expectations not met

Some participants vividly described experiences of feeling that a GP did not fully explore their problem and seemed to rely on prescribing as a management option:

‘But he sort of went “okay we’ll just put you on anti-depressants and see you every two weeks and let’s see what does, see if your mood increases, if anything happens, if you stop self-harming, if things decrease” … I ended up maxing out on the amount you can get with anti-depressants within like six months, and they weren’t sitting well with me.’ (Gemma, 25 years)

Some accounts revealed tensions within participants’ perspectives on previous consultations:

‘It was … hmphh [small sigh] … it was pretty positive, I mean he was, he was understanding, very non-judgemental, warm, I felt comfortable telling him everything … one thing that did not feel quite right was the way he responded … like I told him I don’t know … “I have a sore throat”.’ (Lucy, 24 years)

Lucy suggested that although she found the GP to be considerate, which supported her self-harm disclosure, she thought he responded casually to that disclosure.
Preconceptions and fears

Young people held preconceived views of GPs’ care for self-harm, which acted as a barrier and hindered them from seeking support for self-harm through general practice:

‘From what I’ve noticed from others and my own experiences is that they don’t really get good experiences straight away, and then like I said, it took me such a long time before I actually tried again.’ (Emily, 23 years)

Young people vividly shared fears of being admitted to hospital, loss of confidentiality, and stigma as barriers to accessing support from general practice:

‘I thought they’d hospitalise me immediately. I thought they’d panic and push me away as if, “no, you’ve gotta — you know, you’ve got to go into an inpatient unit, and we’ve got to inform your family, and you’ve got to quit your course”.’ (Kate, 22 years)

‘Erm, it was really, really difficult erm, it’s hard enough trying to get an appointment these days, erm, and it’s when they ask you on the phone, “can I ask you like why you need this appointment”, I just lied and just said, you know, I feel ill … I felt a lot of shame in it.’ (Ian, 21 years)

As Ian highlights, fear of the stigma around self-harm was a barrier to accessing care.

Experiences of consulting GPs influence help-seeking

Young people reflected on the importance of GPs’ responses when self-harm was discussed in the consultation, and provided insights into the impact these responses had on help-seeking, highlighting the potential consequences of a negatively perceived GP consultation.
Some participants described that their experience of seeing GPs for self-harm recursively affected their decision to seek help for self-harm in the future:

‘Err I dunno it was easy to get a doctor’s appointment but the help I got it wasn’t any help as it then put me back another three years until I got help again … like they printed out a form about manic depression and generalised anxiety disorder and then that was it and that’s all I got … nothing and then I never bothered until another three years later.’ (Catherine, 21 years)

Other participants, however, reflected on how positive experiences of consulting a GP had resulted in them seeking help from GPs in the future:

‘I had a really great experience, being able to share that with someone new, also my comfort zone and he [GP] just became my support so if I ever had to have an appointment, I just went to him for continuity.’ (Jemima, 25 years)

Some participants shared that if they felt self-harm was dismissed by GPs after it was mentioned in the consultation, their future help-seeking behaviour changed as a result:

‘I left the conversation feeling perhaps I was assigning more importance to this that it requires … because I said “if the GP is not too concerned, I shouldn’t be” … I felt I needed to tell him … that I’m actually overdosing on them [prescribed antidepressants] … I did tell him, and once again I didn’t get any reaction … so I decided to stop my medication without telling him and I never attended another appointment with him … I’ve never been to see the GP since and it’s been six months.’ (Lucy, 24 years)

Lack of knowledge on self-harm and accessing support

Participants also described how a lack of knowledge about self-harm and its risks, and about accessing care in general practice, was a barrier to seeking support from general practice:

‘There’s sort of a lack of knowledge around the healthcare system works and booking GP appointments is scary … also it’s sort of around knowledge where I didn’t necessarily realise that me self-harming was wrong or was a sign that I will [do so] for a very long time.’ (Divya, 23 years)

These barriers are mapped to a candidacy framework and presented in .
Facilitators to accessing general practice care

Listening and acting

Participants described positive consultation experiences when GPs were proactive in assessing and managing their self-harm:

‘Yeah, yeah, yeah it was erm, yeah, my GP here is very helpful, he gives me different, he gives me options and then explains to me which erm, how each go, which things that would be best for me.’ (Emily, 23 years)

Some participants shared experiences of feeling that GPs were active listeners and non-judgemental, and involved them in shared decision making:

‘His patience and lack of judgement was amazing, just to listen to my experiences of what happens for emotionally when I’m self-harming, erm, it was incredible.’ (Kate, 22 years)

‘He didn’t over-react … he was really good in the way he handled things … the way he felt comfortable to talk to me about it made me comfortable, even though I didn’t feel anything at that time, I didn’t feel as though I was being judged … I was on citalopram and he discussed in detail what the side effects were, what would happen, what the benefit of sertraline were and he said, if you need me to speak to your therapist, I will … he was amenable to helping me with my self-harm.’ (Jemima, 25 years)

Being understood

Young people described wanting to be understood by GPs and treated as individuals:

‘It just feels with the GPs, very erm like almost like they are reading from a script with it … as opposed to talking with you about it.’ (Marie, 19 years)

‘He could have found out more, asked to find out more and then talked to me more, or at least talked to me about maybe what I wanted to do.’ (Hannah, 19 years)

Participants wanted GPs to personalise care and support to them, which may facilitate help-seeking in young people and reduce self-harm behaviour.
Relationship-based care

An important facilitator identified was an ongoing relationship and continuity with GPs for self-harm care:

‘Definitely a key part is the rapport between the GP and the err … so definitely continuity, relational continuity definitely plays a role in that, especially I would have loved to have a GP I’ve known for a couple of years, perhaps I could have prevented this whole thing from happening.’ (Lucy, 24 years)

Some participants also suggested that longer GP consultations would support young people in accessing general practice self-harm care, and that this can support relationship-based care:

‘Continuity and a good frequency of GP appointments is really helpful. You don’t build up a rapport in one appointment, it’s ten minutes, and some place it’s five minutes, you need time to do that, quite often they’ll book double appointments knowing that I’ve only got one problem.’ (Kate, 22 years)

Shorter waiting times

Participants described that shorter waiting times to see GPs would facilitate access to care for young people who self-harm:

‘When you self-harm you like you don’t wanna wait two weeks, you need to see them there or then, or not at all … like it’s very instant, so like if you’re gonna self-harm or have self-harmed, there is no point seeing them in two weeks.’ (Catherine, 21 years)
Summary
To the authors’ knowledge, this is one of the first studies to explore young people’s experiences of care and their views on access to care in general practice for self-harm. Young people described avenues of help-seeking encompassing significant others, non-statutory services, and NHS services. Young people reflected on poor GP experiences, and how these influenced future help-seeking. Prior views, a lack of knowledge, and fear were further barriers identified for help-seeking. GPs listening, taking action, showing understanding, and providing relationship-based care, as well as shorter GP waiting times, were facilitators to accessing general practice care.
Strengths and limitations
This study provides new and rich insights, in a general practice context, into young people who self-harm. The analysis approach used allowed for a fluid analytical process, ensuring the full richness of the data were explored in generating themes close to the original data. The researcher shared his professional background with participants early on to build trust and rapport; however, it is acknowledged that doing so may influence the content of participants’ narratives. The involvement of the patient and public involvement advisory group in the interpretation of findings allowed the inclusion of lay perspectives, thereby improving the relevance and validity of the findings. Limitations include the possibility of selection bias, as participants who were willing to be interviewed may hold different views from those who were not. Interviews were conducted at one time-point in the young person’s journey of self-harm care; it is therefore not possible to understand how experiences and perspectives progressed over time. Despite efforts to recruit a diverse sample of young people, a convenience sample may not be representative of all young people who self-harm; young people in this study were nearly all female and in further or higher education.
Comparison with existing literature
A systematic review found adolescents (aged 11–19 years) turned to informal sources of support, such as families and friends, for self-harm; this is congruent with the finding in the current study that family and friends are one avenue of help-seeking that young people turn to. Parents, families, friends, and significant others, however, can be a barrier to help-seeking, and this finding builds on existing evidence that parents can have a negative impact on help-seeking in young people. In this study young people found their ‘candidacy’ to be both an enabler of and a barrier to seeking help. This study found mixed experiences of seeing GPs for self-harm, and young people wanted to be understood by GPs, which is similar to the findings of Bellairs-Walsh et al , although their study was not solely about self-harm. The importance of the GP response, and how this had an impact on the decision to seek future help, was highlighted in the current study, corresponding to the adjudication stage of the candidacy framework: the judgements and decisions made by professionals that allow or inhibit continued progression of candidacy. This study found that experiences of consulting GPs for self-harm recursively affected future help-seeking from GPs; in one instance it took a young person 3 years to seek help from a GP after a negatively perceived consultation. This has also been described in young people who are suicidal. A fear of losing confidentiality was identified as a barrier to accessing care, and this has also been reported in young people with suicidal behaviour and self-harm, and for mental health concerns in young people more widely. Relationship-based care, an identified facilitator for young people accessing self-harm care, is valued by patients and the Royal College of General Practitioners, and is associated with improved patient satisfaction and reduced mortality.
Continuity of GP care supports the building of a trusting, therapeutic relationship between GP and patient, which may help young people reduce self-harm behaviour.
Implications for research and practice
Interviews with GPs to explore their views on managing young people who self-harm are important, and further qualitative research with young people, including those from socioeconomically deprived communities and those not in higher education, using a longitudinal design will provide understanding of experiences, critical moments, and changes in self-harm over time. These findings can inform general practice-based interventions for young people who self-harm and primary care models of self-harm care, as outlined in the NHS Long Term Plan . As highlighted by the Samaritans, evidence-informed high-intensity psychological therapies should be available to support GP management. Young people require clear public health messages about self-harm, its risks, accessing general practice, and confidentiality. These can be delivered through the GP consultation, practice-level dissemination, Public Health England, local authorities, and NHS England. GPs need to adhere to current NICE guidance on self-harm management, and the authors suggest recommendations, informed by this study, for GPs managing self-harm in young people. Young people want GPs who can establish and maintain relationship-based care for individuals who self-harm, and practices that are flexible when booking appointments may facilitate this.
Plant species identity and plant-induced changes in soil physicochemistry, but not plant phylogeny or functional traits, shape the assembly of the root-associated soil microbiome
In contrast, other studies have observed that the abiotic conditions of the soil environment (i.e. soil type, pH, nutrient availability, and C:N ratio) are of greater influence on microbial community assembly in the rhizosphere and root-associated soil environment (Girvan et al. , Ulrich and Becker , Lauber et al. , Xiao et al. , Yeoh et al. , Veach et al. , Ren et al. ). This ecological conundrum draws parallels to the nature-versus-nurture debate that has shaped research around human development for decades. Several studies have proposed an assembly model more akin to nature-via-nurture, whereby both the PS and soil shape the microbiome, and the relative strength of these different drivers will vary depending on the specific ecological context (Garbeva et al. , Berg and Smalla , Tkacz et al. , Müller et al. , Lee and Hawkes ). In the nature-via-nurture model, soil provides the primary source of microbial inoculum available to plants and sets the boundaries from which plants may select their microbiome. The dominant influence of soil type and edaphic properties on determining the broad patterns of microbial biogeography was recognized by Fierer and Jackson ( ) and Lauber et al. ( ). However, throughout their development and life span, plants and their root systems exert species-specific influences on the rhizosphere and root-associated soil environment, which drives environmental filtering of their microbiome (Berg and Smalla , Chaparro et al. , Reinhold-Hurek et al. , Hu et al. ). Additionally, the symbiotic associations of plant hosts (e.g. N2-fixing rhizobia, arbuscular mycorrhizal fungi) have been shown to shape the assembly of the root microbiome (Hartman et al. ). Two primary classes of processes have been proposed to shape microbiomes: deterministic and stochastic (Goss-Souza et al. ). Niche-based, deterministic models propose that the biotic and abiotic conditions of the local environment drive microbial selection (Carroll et al. , Goss-Souza et al. ).
Deterministic models can be further split into primary and secondary processes. Primary deterministic processes constitute a more direct mechanism, whereby the release of plant-specific rhizo-deposits selects or favours microbial taxa from the wider soil microbial community (Hu et al. , Sasse et al. , Zhalnina et al. ). Secondary deterministic processes function indirectly: plant roots modify the general rhizosphere and soil conditions (pH, available P, nitrogen, etc.), and these changes, in turn, encourage the growth of microorganisms best adapted to that modified habitat space (Hinsinger , Liang et al. , Bell et al. , van Veelen et al. , Hernández-Cáceres et al. ). In contrast to deterministic models, stochastic models propose an element of randomness to community assembly (Dini-Andreote et al. , Goss-Souza et al. ). These deterministic and stochastic processes do not occur independently of each other, and the challenge is determining their relative contributions under different experimental and ecological contexts. We performed a plant experiment in which 37 different PS spanning a broad phylogenetic range were grown in an identical blended soil medium. Following 10 months of growth, the root-associated soil microbiome was characterized using 16S rRNA gene and ITS region sequencing. The differences in the structural variance of the root-associated soil microbiome between PS were related to their phylogenetic and functional traits, as well as to any plant-induced changes in soil physicochemistry (SC) that occurred throughout the experiment. By performing this experiment, our research aimed to partition the influences of primary and secondary deterministic processes (i.e. direct and indirect plant effects) and stochastic processes on microbial assembly in the root-associated soil environment.
Furthermore, we hypothesize that the phylogenetic relatedness of the PS will be positively correlated to the phylogenetic similarity of their root-associated soil microbiome.
Plant and soil sample collection
Plants from 37 different species were grown in a blended soil medium or obtained from a commercial nursery (Southern Woods Plant Nursery, New Zealand). The selected PS covered a broad range of phylogenetic groups and included representatives from three plant classes, 12 orders, 14 families, and 31 genera. The PS covered a range of different life spans (annual, perennial, or long-lived), functional groups (e.g. grass, shrub, or tree), and provenances (exotic or native to Aotearoa New Zealand). They also included species with different mycorrhizal associations (arbuscular, AMF; ectomycorrhizal, EMF; and no association) and N2 fixation (presence or absence). Plant metadata were primarily obtained from PS profiles on the New Zealand Plant Conservation Network ( https://www.nzpcn.org.nz/ ) and from literature searches where additional information was needed. The full list of PS used in this study and their associated metadata are provided in . The seeds or cuttings of each PS were planted in individual 10-L pots containing a blend of field-collected live soils mixed with a pasteurized soil:sand carrier for bulk. Between 12 and 20 replicate pots were established for each PS. The collection of the live soils was conducted across 12 sub-alpine, grass-, and shrub-dominated sites that included the ranges of the plants being experimentally evaluated. This sampling design was used so that the microbiomes associated with these species would be available for plant ‘recruitment’. However, it is important to state that we did not examine the microbial background of these ‘live’ soils before setting up the experiment. Details regarding the collection, handling, treatment, and mixing of these soils were first provided by Wakelin et al. ( ). The plants were randomized within a glasshouse and grown with regular watering and supplemental lighting when required.
No fertilizers or other chemicals were added to the pots, and weeds were removed when apparent. After 10 months of plant growth, root-associated soil samples were collected from each plant pot. Samples were collected from between four and seven replicate pots of each plant, depending on plant availability [i.e. plants that had grown to full health and were not required for other research (Wakelin et al. )]. All root-associated soil samples were collected aseptically by pressing an open 50-mL conical centrifuge tube into the soil adjacent to the stem(s) of the plant in each pot, directly into the root zone. Three samples from around the circumference of each pot were collected and pooled to provide a single sample for each replicate of each PS. Pooled soil samples were sieved to 2 mm and stored at either 4°C until physicochemical analysis or -80°C until DNA extraction.
Measurements of soil edaphic properties
The edaphic properties of all root-associated soil samples were characterized at Hill Laboratories (Christchurch, New Zealand), where soil pH, Olsen phosphorus (mg/L), sulphate sulphur (mg/kg), total carbon (TC; %), organic matter (%), total nitrogen (TN; %), C:N ratio, potentially available N (kg/ha), anaerobically mineralizable N (AMN; %), AMN:TN ratio, and volume weight (g/mL) were determined using the protocols described by Wakelin et al. ( ).
Plant DNA extraction and matK gene sequencing
The DNA of each PS was extracted using the DNeasy Plant Mini Kit (QIAGEN), utilizing cryogenic tissue grinding of plant leaves with a sterilized mortar and pestle in liquid nitrogen. For phylogenetic inference, the Maturase K gene (matK) was amplified using the primers MatK472F (5′-CCRTCATCTGGAAATCTTGGTT-3′) and MatK1248R (5′-GCTRTRATAATGAGAAAGATTTCTGC-3′) (Fatima et al. ). PCR conditions consisted of an initial denaturation step of 94°C for 5 min, followed by 35 cycles of 94°C for 30 sec, 56°C for 30 sec, and 72°C for 42 sec, followed by a final extension at 72°C for 10 min.
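The thermocycling profile above is easy to sanity-check by treating it as data. The sketch below (an illustrative Python snippet, not part of the study's workflow; step values are taken directly from the stated profile, and instrument ramp times are ignored) totals the programme's hold time:

```python
# matK PCR thermocycling profile as stated above (temperature in deg C,
# duration in seconds). 35 denaturation/annealing/extension cycles are
# flanked by an initial denaturation and a final extension.
PROGRAM = {
    "initial_denaturation": (94, 5 * 60),     # 94 deg C for 5 min
    "cycle": [(94, 30), (56, 30), (72, 42)],  # repeated N_CYCLES times
    "final_extension": (72, 10 * 60),         # 72 deg C for 10 min
}
N_CYCLES = 35

def total_runtime_s(program, n_cycles):
    """Sum the hold times across the whole programme (ramp times ignored)."""
    fixed = program["initial_denaturation"][1] + program["final_extension"][1]
    per_cycle = sum(t for _temp, t in program["cycle"])
    return fixed + n_cycles * per_cycle

print(total_runtime_s(PROGRAM, N_CYCLES) / 60)  # 4470 s of hold time = 74.5 min
```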
The PCR reaction mixture consisted of 1 × PCR buffer, 0.5 mmol L−1 dNTPs, 0.25 μmol L−1 of each primer, 1 U Taq polymerase, and 5–50 ng of template DNA. The PCR products were purified using the QIAquick PCR Purification Kit, and the purified DNA was sequenced using Sanger sequencing at Macrogen (Seoul, Korea). The quality of the sequencing data was checked and edited using Sequencher software version 5.4.6 (Gene Codes Corp, Ann Arbor, MI, USA). MEGA X (Kumar et al. ) was then used for sequence alignment and phylogenetic analysis. Briefly, matK gene-based sequences were aligned using MUSCLE, and overhanging nucleotides were removed and the sequences re-aligned. Distance matrices and phylogenetic trees were constructed using the maximum likelihood method and the Tamura–Nei model (Tamura and Nei ).
Soil DNA extraction and 16S rRNA gene/ITS region sequencing
Soil DNA was extracted from 0.25 g of soil using a DNeasy PowerSoil Kit (QIAGEN) according to the manufacturer’s protocol and quantified using a Nanodrop spectrophotometer. Subsequent Illumina amplicon sequencing followed the Earth Microbiome Project’s (EMP) protocol (Caporaso et al. ). In short, the bacterial 16S rRNA gene was amplified using the primers 515F (5′-GTGYCAGCMGCCGCGGTAA-3′) and 806R (5′-GGACTACNVGGGTWTCTAAT-3′) targeting the V4–V5 regions as described previously (Apprill et al. , Parada et al. ). The fungal ITS region was amplified using the primers ITS1f (5′-CTTGGTCATTTAGAGGAAGTAA-3′) and ITS2 (5′-GCTGCGTTCTTCATCGATGC-3′) as described previously (Bokulich and Mills , Hoggard et al. ). After PCR amplification, samples were purified using a Magnetic Bead PCR Cleanup Kit (Geneaid) and pooled in equimolar concentrations. The purified PCR products were used to prepare DNA libraries following the Illumina TruSeq DNA library preparation protocol using the Illumina MiSeq Reagent Kit v2.
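The matK distance matrices described above are model-based (Tamura–Nei) distances computed in MEGA X. As a simplified illustration of how a pairwise distance matrix is derived from aligned sequences, the sketch below computes uncorrected p-distances (the fraction of aligned sites that differ, with pairwise deletion of gapped sites); the sequences are invented toy fragments, not real matK data, and a p-distance is a stand-in for, not equivalent to, the Tamura–Nei distance used in the study:

```python
def p_distance(seq_a, seq_b):
    """Uncorrected p-distance: fraction of aligned sites that differ.
    Sites containing a gap in either sequence are skipped (pairwise deletion)."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    compared = diffs = 0
    for a, b in zip(seq_a, seq_b):
        if a == "-" or b == "-":
            continue
        compared += 1
        diffs += a != b
    return diffs / compared

def distance_matrix(seqs):
    """Symmetric matrix of pairwise p-distances for a dict of aligned sequences."""
    names = list(seqs)
    return {(x, y): p_distance(seqs[x], seqs[y]) for x in names for y in names}

# Toy aligned fragments (hypothetical; for illustration only).
aln = {
    "species_A": "ATGCCGTCATCTGG",
    "species_B": "ATGCCGTTATCTGG",
    "species_C": "ATGACGTTAT-TGG",
}
dm = distance_matrix(aln)
print(round(dm[("species_A", "species_B")], 3))  # 1 difference / 14 sites -> 0.071
```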
Illumina sequencing was performed at the Australian Genome Research Facility (Melbourne, Australia) using 2 × 150 bp paired-end chemistry on a MiSeq platform following the manufacturer’s guidelines.
Statistical analysis
Following sequencing, paired-end fastQ files were processed into amplicon sequence variants (ASVs) using the DADA2 version 1.18 workflow (Callahan et al. ). Briefly, the forward and reverse reads were quality-filtered, trimmed, and denoised before being merged into ASVs. Chimeric ASVs were removed, and taxonomies were assigned to each ASV using the Ribosomal Database Project (RDP) Classifier (Wang et al. ) and the UNITE (Abarenkov et al. ) databases. Following DADA2 processing, ASV count tables were filtered to remove unidentified and unwanted phyla (i.e. Cyanobacteria/Chloroplasts) and singletons. The ASV count tables were rarefied to adjust for differences in library size between samples. Before rarefaction, samples with low read counts were removed to avoid excessive data loss. Rarefaction curves displaying the number of ASVs in each sample have been provided in . The number of replicates per PS that were included in the rarefied 16S (henceforth reported as ‘bacterial’) and ITS (henceforth reported as ‘fungal’) ASV datasets is displayed in . In total, all the PS had at least three replicates in the rarefied fungal ASV dataset. In the rarefied bacterial ASV dataset, 35 out of the 37 PS had at least three replicates; however, only two replicates remained for the PS Chionochloa conspicua and Trifolium repens following rarefaction. The rarefied bacterial and fungal ASV datasets were analysed separately using the multivariate statistical analyses outlined below. Maximum likelihood phylogenetic trees were built using FastTree2 (Price et al. ). To provide estimates of alpha diversity, Faith’s phylogenetic diversity (PD) and species richness (SR) were calculated for each sample in Picante R (Kembel et al. ).
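The two alpha-diversity indices calculated here with Picante can be illustrated with a toy example: Faith's PD sums the branch lengths spanned by the taxa observed in a sample, whereas SR simply counts the taxa. The tree topology, branch lengths, and tip names below are invented for illustration and are not the study's phylogenies:

```python
# Toy rooted tree encoded as node -> (parent, branch length).
# Tips t1..t4 hang off internal nodes n1 and n2, which join at the root.
TREE = {
    "t1": ("n1", 0.10), "t2": ("n1", 0.20),
    "t3": ("n2", 0.30), "t4": ("n2", 0.15),
    "n1": ("root", 0.05), "n2": ("root", 0.25),
}

def faith_pd(tips, tree=TREE):
    """Faith's PD: sum of branch lengths on the union of root-to-tip paths
    for the taxa observed in a sample."""
    edges = set()
    for tip in tips:
        node = tip
        while node in tree:       # walk up to the root
            edges.add(node)       # each node keys the edge to its parent
            node = tree[node][0]
    return sum(tree[e][1] for e in edges)

def species_richness(tips):
    """SR ignores phylogeny entirely: just the number of distinct taxa."""
    return len(set(tips))

print(round(faith_pd({"t1", "t2"}), 2))  # 0.10 + 0.20 + 0.05 = 0.35
print(species_richness({"t1", "t2"}))    # 2
```

Note that the shared edge n1-root is counted only once, which is why PD rewards phylogenetically dispersed communities: adding t3 raises PD far more than adding another close relative of t1 would.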
The PD index assesses the PD of a community and is defined as the sum of the total phylogenetic branch length separating taxa in a community (Faith , Kembel et al. ). In contrast, the SR index calculates the total number of taxa in a community based on their identity alone; no phylogenetic information is factored into the calculation. The differences in the PD and SR index between plant host-related factors were tested for significance using Kruskal–Wallis tests and pairwise Wilcoxon tests with Bonferroni correction. To estimate the phylogenetic distances in microbial community composition between samples, weighted UniFrac distances were calculated on the rarefied bacterial and fungal ASV count tables (Lozupone et al. ). Differences in bacterial and fungal community composition between the plant-related factors were tested for significance using permutational multivariate analysis of variance (PERMANOVA) on distance metrics using the adonis2 (by = ‘terms’) function in vegan R (Oksanen et al. ) and pairwiseAdonis (Martinez Arbizu ). The differences in community composition were visualized using non-metric multidimensional scaling (NMDS) ordination plots. To estimate the within-group variance amongst samples, the average distance of individual samples to the group centroid (beta dispersion) was calculated using the betadisper function in phyloseq R (McMurdie and Holmes ). Permutation tests were used to determine significant differences in the within-group variance between plant-related factors. The weighted UniFrac distances were correlated to differences in soil physicochemical properties using Mantel tests. In addition, weighted UniFrac distances were correlated to matrices of matK sequence similarity using Mantel tests. MatK similarity matrices were constructed to represent the phylogenetic relatedness between the different PS under investigation.
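The Mantel tests above correlate the upper triangles of two distance matrices and assess significance by permuting the sample labels of one matrix. A minimal sketch of that logic follows; the matrices are toy stand-ins for the weighted UniFrac and matK distance matrices (derived here from points on a line), and this is not the vegan implementation used in the study:

```python
import random
from itertools import combinations

def pearson(xs, ys):
    """Plain Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def mantel(dm_a, dm_b, n_perm=999, seed=0):
    """Mantel test: correlate upper triangles of two symmetric distance
    matrices; p-value from permuting the object labels of matrix B."""
    n = len(dm_a)
    pairs = list(combinations(range(n), 2))
    flat_a = [dm_a[i][j] for i, j in pairs]
    r_obs = pearson(flat_a, [dm_b[i][j] for i, j in pairs])
    rng = random.Random(seed)
    hits = sum(
        pearson(flat_a, [dm_b[p[i]][p[j]] for i, j in pairs]) >= r_obs
        for p in (rng.sample(range(n), n) for _ in range(n_perm))
    )
    return r_obs, (hits + 1) / (n_perm + 1)

# Toy matrices: dm_b is a scaled copy of dm_a, so the correlation is perfect.
xs = [0, 1, 3, 7, 12, 20]
dm_a = [[abs(a - b) for b in xs] for a in xs]
dm_b = [[2 * d for d in row] for row in dm_a]
r, p = mantel(dm_a, dm_b)
print(round(r, 3), p < 0.05)  # 1.0 True
```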
Observations of the phylogenetic tree generated from the matK sequences showed sensible grouping of PS according to their taxonomic position. Hierarchical clustering analysis was performed on the weighted UniFrac distance matrices and matK distance matrices using the complete linkage method in Stats R. Following this, dendrograms were constructed to visually compare differences in the clustering patterns of PS based on the weighted UniFrac distances of their fungal and bacterial communities versus their matK distances in ape R (Paradis and Schliep ). Variance partitioning (VP) analysis was performed in vegan R to partition the variance observed in bacterial and fungal community composition (as represented by weighted UniFrac distances) among the plant-related factors (Oksanen ). Four explanatory matrices were constructed to represent the different influencing factors: PS; plant life history (PLH) (i.e. provenance + life span + functional group); plant rhizosphere traits (PRT) (i.e. mycorrhizal association + N2 fixation); and SC. All unexplained (residual) variation from VP analysis was tentatively assigned to represent the influence of stochastic processes. Following VP analysis, distance-based redundancy analysis (db-RDA) was performed to test the significance of each explanatory matrix whilst conditioning for the other three matrices. In addition, forward stepwise selection was performed to identify the soil physicochemical properties that best accounted for the community variance that was partitioned to the influence of SC. Pairwise differences between the 37 PS in the soil physicochemical properties selected by the forward selection model were identified using pairwise t-tests with Holm correction. The rarefied bacterial and fungal ASV tables were used as input for differential abundance analysis. First, the R package pime was used to select bacterial and fungal ASVs that best defined the microbiome of each PS (Luiz Fernando ).
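The variance partitioning described above reduces to inclusion-exclusion arithmetic over (adjusted) R² values, with the residual fraction being what the study tentatively assigns to stochastic processes. A two-matrix sketch with hypothetical R² values (the study itself partitions four matrices, but the logic is the same pairwise) shows the arithmetic:

```python
def partition_two(r2_a, r2_b, r2_ab):
    """Partition community variance between explanatory matrices A and B,
    given the R^2 of A alone, B alone, and A+B combined (inclusion-exclusion)."""
    unique_a = r2_ab - r2_b           # variance only A explains
    unique_b = r2_ab - r2_a           # variance only B explains
    shared = r2_a + r2_b - r2_ab      # variance confounded between A and B
    residual = 1.0 - r2_ab            # unexplained; here read as 'stochastic'
    return {"A": unique_a, "B": unique_b, "shared": shared, "residual": residual}

# Hypothetical values, e.g. A = plant species identity, B = soil physicochemistry.
parts = partition_two(r2_a=0.30, r2_b=0.25, r2_ab=0.45)
print({k: round(v, 2) for k, v in parts.items()})
# {'A': 0.2, 'B': 0.15, 'shared': 0.1, 'residual': 0.55}
```

Note that the shared fraction is confounded variance, not an interaction effect, which is why the study follows VP with db-RDA conditioning on the other matrices to test each matrix's unique contribution.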
Prevalence intervals with an out-of-bag (OOB) error rate of 0% were selected as cut-offs. For fungal ASVs, this was a prevalence of 75%, which retained 217 ASVs and 1 168 389 sequences. For bacterial ASVs, this was a prevalence of 80%, which retained 771 ASVs and 433 795 sequences. PIME-filtered ASV count tables were used as input for differential abundance analysis using metagenomeSeq R (Paulson et al. ), where the log change estimate of each ASV between different PS was calculated using the fitLogNormal function. Significant differences in the log change estimates of ASVs between PS were determined using permutation tests ( n = 999) with correction for multiple comparisons using the Holm–Bonferroni method. Heatmaps were produced using pheatmap R (Kolde ) to display (a) the bacterial and fungal ASVs with significant log change estimates across PS and (b) the correlation (Pearson’s) shared between different PS based on the log change estimates of their bacterial and fungal ASVs.
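The Holm-Bonferroni correction applied to the permutation-test p-values is a step-down procedure: rank the raw p-values, scale each by the number of hypotheses not yet dealt with, and enforce monotonicity. The sketch below reproduces the behaviour of R's p.adjust(method = "holm") on a small set of invented raw p-values:

```python
def holm_adjust(pvalues):
    """Holm-Bonferroni step-down adjustment, returning adjusted p-values
    in the original input order (as R's p.adjust(method = 'holm') does)."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])  # ascending by p
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        p_adj = min(1.0, (m - rank) * pvalues[i])  # multiplier shrinks per rank
        running_max = max(running_max, p_adj)       # enforce monotonicity
        adjusted[i] = running_max
    return adjusted

raw = [0.01, 0.04, 0.03, 0.005]  # invented raw permutation p-values
print([round(p, 3) for p in holm_adjust(raw)])  # [0.03, 0.06, 0.06, 0.02]
```

Holm controls the family-wise error rate like plain Bonferroni but is uniformly more powerful, since only the smallest p-value is multiplied by the full number of comparisons.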
Plants from 37 different species were grown in a blended soil media or obtained from a commercial nursery (Southern Woods Plant Nursery, New Zealand). The selected PS covered a broad range of phylogenetic groups and included representatives from three plant classes, 12 orders, 14 families, and 31 genera. The PS covered a range of different life spans (annual, perennial, or long-lived), functional groups (e.g. grass, shrub, or tree), and provenances (exotic or native to Aotearoa New Zealand). They also included species with different mycorrhizal associations (arbuscular, AMF, ectomycorrhizal, EMF, and no association) and N 2 fixation (presence or absence). Plant metadata was primarily obtained from PS profiles on the New Zealand Plant Conservation Network ( https://www.nzpcn.org.nz/ ) and from literature searches where additional information was needed. The full list of PS used in this study and their associated metadata are provided in . The seeds or cuttings of each PS were planted in individual 10-L pots containing a blend of field-collected live soils mixed with a pasteurized soil: sand carrier for bulk. Between 12 and 20 replicate pots were established for each PS. The collection of the live soils was conducted across 12 sub-alpine, grass, and shrub-dominated sites that included the ranges of the plants being experimentally evaluated. This sampling design was performed to allow for the microbiomes associated with these species to be available for plant ‘recruitment’. However, it is important to state that we did not examine the microbial background of these ‘live’ soils before setting up the experiment. Details regarding the collection, handling, treatment, and mixing of these soils were first provided by Wakelin et al. ( ). The plants were randomized within a glasshouse and grown with regular watering and supplemental lighting when required. No fertilizers or other chemicals were added to the pots, and weeds were removed when apparent. 
After 10 months of plant growth, root-associated soil samples were collected from each plant pot. Samples were collected from between four and seven replicate pots of each plant depending on plant availability [i.e. plants that had grown to full health and were not required for other research (Wakelin et al. )]. All root-associated soil samples were collected aseptically by pressing an open 50-mL conical centrifuge tube into the soil adjacent to the stem(s) of the plant in each pot directly into the root zone. Three samples from around the circumference of each pot were collected and pooled to provide a single sample for each replicate of each PS. Pooled soil samples were sieved to 2 mm and stored at either 4°C until physicochemical analysis or -80°C until DNA extraction.
The edaphic properties of all root-associated soil samples were characterized at Hill Laboratories (Christchurch, New Zealand), where soil pH, Olsen phosphorus (mg/L), sulphate sulphur (mg/kg), total carbon (TC; %), organic matter (%), total nitrogen (TN; %), C:N ratio, potentially available N (kg/ha), anaerobically mineralizable N (AMN; %), AMN:TN ratio, and volume weight (g/mL) were determined using the protocols described by Wakelin et al. ( ).
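Several of the reported edaphic measures are derived ratios of the primary measurements. As a minimal illustration (with hypothetical values, not data from this study), the C:N and AMN:TN indices can be computed from total carbon, total nitrogen, and anaerobically mineralizable nitrogen:

```python
def derived_indices(total_c_pct: float, total_n_pct: float, amn_pct: float) -> dict:
    """Derive the ratio indices reported alongside the primary soil measurements.

    All inputs are percentages, as reported for TC, TN, and AMN in this study.
    """
    return {
        "C:N": total_c_pct / total_n_pct,
        "AMN:TN": amn_pct / total_n_pct,
    }

# Hypothetical soil sample: 4.2% total C, 0.35% total N, 0.014% AMN.
idx = derived_indices(4.2, 0.35, 0.014)
```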
The DNA of each PS was extracted using the DNeasy Plant Mini Kit (QIAGEN), utilizing cryogenic tissue grinding of plant leaves with a sterilized mortar and pestle in liquid nitrogen. For phylogenetic inference, the maturase K (matK) gene was amplified using the primers MatK472F (5′-CCRTCATCTGGAAATCTTGGTT-3′) and MatK1248R (5′-GCTRTRATAATGAGAAAGATTTCTGC-3′) (Fatima et al. ). PCR conditions consisted of an initial denaturation step of 94°C for 5 min, followed by 35 cycles of 94°C for 30 sec, 56°C for 30 sec, and 72°C for 42 sec, followed by a final extension at 72°C for 10 min. The PCR reaction mixture consisted of 1 × PCR buffer, 0.5 mmol L −1 dNTPs, 0.25 μmol L −1 of each primer, 1 U Taq polymerase, and 5–50 ng of template DNA. The PCR products were purified using the QIAquick PCR Purification Kit, and the purified DNA was sequenced using Sanger sequencing at Macrogen (Seoul, Korea). The quality of the sequencing data was checked and edited using Sequencher software version 5.4.6 (Gene Codes Corp, Ann Arbor, MI, USA). MEGA X (Kumar et al. ) was then used for sequence alignment and phylogenetic analysis. Briefly, matK gene-based sequences were aligned using MUSCLE; overhanging nucleotides were removed and the sequences re-aligned. Distance matrices and phylogenetic trees were constructed using the maximum likelihood method and the Tamura–Nei model (Tamura and Nei ).
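MEGA X computes model-based distances (here, Tamura–Nei) for the maximum likelihood tree. As a simplified, pure-Python illustration of building a pairwise distance matrix from aligned sequences, the sketch below uses the uncorrected p-distance (proportion of differing non-gap sites); the species names and matK fragments are invented for illustration and are not data from this study:

```python
from itertools import combinations

def p_distance(seq_a: str, seq_b: str) -> float:
    """Proportion of differing aligned sites, ignoring gap positions.

    A simplification of the Tamura-Nei model, which additionally corrects
    for unequal base frequencies and transition/transversion bias.
    """
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    if not pairs:
        return 0.0
    diffs = sum(a != b for a, b in pairs)
    return diffs / len(pairs)

def distance_matrix(aligned: dict) -> dict:
    """Symmetric pairwise distance matrix over a dict of aligned sequences."""
    dist = {(name, name): 0.0 for name in aligned}
    for x, y in combinations(aligned, 2):
        d = p_distance(aligned[x], aligned[y])
        dist[(x, y)] = dist[(y, x)] = d
    return dist

# Hypothetical aligned matK fragments (illustrative only).
aligned = {
    "sp_A": "CCGTCATCTGGAAAT",
    "sp_B": "CCGTCATTTGGAAAT",
    "sp_C": "CCATCGTTTG-AAAT",
}
dm = distance_matrix(aligned)
```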
Soil DNA was extracted from 0.25 g of soil using a DNeasy PowerSoil Kit (QIAGEN) according to the manufacturer’s protocol and quantified using a Nanodrop spectrophotometer. Subsequent Illumina amplicon sequencing followed the Earth Microbiome Project’s (EMP) protocol (Caporaso et al. ). In short, the bacterial 16S rRNA gene was amplified using the primers 515F (5′- GTGYCAGCMGCCGCGGTAA -3′) and 806R (5′- GGACTACNVGGGTWTCTAAT -3′) targeting the V4–V5 regions as described previously (Apprill et al. , Parada et al. ). The fungal ITS region was amplified using the primers ITS1f (5′- CTTGGTCATTTAGAGGAAGTAA -3′) and ITS2 (5′- GCTGCGTTCTTCATCGATGC -3′) as described previously (Bokulich and Mills , Hoggard et al. ). After PCR amplification, samples were purified using a Magnetic Bead PCR Cleanup Kit (GeneaidTM) and pooled in equimolar concentrations. The purified PCR products were used to prepare DNA libraries following the Illumina TruSeq DNA library preparation protocol using the Illumina MiSeq Reagent Kit v2. Illumina sequencing was performed at the Australian Genome Research Facility (Melbourne, Australia) using 2 × 150 bp pair-end chemistry on a MiSeq platform following the manufacturer’s guidelines.
Following sequencing, paired-end fastQ files were processed into amplicon sequence variants (ASVs) using the DADA2 version 1.18 workflow (Callahan et al. ). Briefly, the forward and reverse reads were quality-filtered, trimmed, and denoised before being merged into ASVs. Chimeric ASVs were removed, and taxonomies were assigned to each ASV using the Ribosomal Database Project (RDP) Classifier (Wang et al. ) and the UNITE (Abarenkov et al. ) databases. Following DADA2 processing, ASV count tables were filtered to remove unidentified and unwanted phyla (i.e. Cyanobacteria/Chloroplasts) and singletons. The ASV count tables were rarefied to adjust for differences in library size between samples. Before rarefaction, samples with low read counts were removed to avoid excessive data loss. Rarefaction curves displaying the number of ASVs in each sample have been provided in . The number of replicates per PS that were included in the rarefied 16S (henceforth reported as ‘bacterial’) and ITS (henceforth reported as ‘fungal’) ASV datasets is displayed in . In total, all the PS had at least three replicates in the rarefied fungal ASV dataset. In the rarefied bacterial ASV dataset, 35 out of the 37 PS had at least three replicates; however, only two replicates remained for the PS Chionochloa conspicua and Trifolium repens following rarefaction. The rarefied bacterial and fungal ASV datasets were analysed separately using the multivariate statistical analyses outlined below. Maximum likelihood phylogenetic trees were built using FastTree2 (Price et al. ). To provide estimates of alpha diversity, Faith’s phylogenetic diversity (PD) and species richness (SR) were calculated for each sample in Picante R (Kembel et al. ). The PD index assesses the PD of a community and is defined as the sum of the total phylogenetic branch length separating taxa in a community (Faith , Kembel et al. ). 
In contrast, the SR index calculates the total number of taxa in a community based on their identity alone; no phylogenetic information is factored into the calculation. Differences in the PD and SR indices between plant host-related factors were tested for significance using Kruskal–Wallis tests and pairwise Wilcoxon tests with Bonferroni correction. To estimate the phylogenetic distances in microbial community composition between samples, weighted UniFrac distances were calculated on the rarefied bacterial and fungal ASV count tables (Lozupone et al. ). Differences in bacterial and fungal community composition between the plant-related factors were tested for significance using permutational multivariate analysis of variance (PERMANOVA) on the distance matrices using the adonis2 (by = 'terms') function in vegan R (Oksanen et al. ) and pairwiseAdonis (Martinez Arbizu ). Differences in community composition were visualized using non-metric multidimensional scaling (NMDS) ordination plots. To estimate the within-group variance amongst samples, the average distance of individual samples to the group centroid (beta dispersion) was calculated using the betadisper function in phyloseq R (McMurdie and Holmes ). Permutation tests were used to determine significant differences in within-group variance between plant-related factors. The weighted UniFrac distances were correlated with differences in soil physicochemical properties using Mantel tests. In addition, weighted UniFrac distances were correlated with matrices of matK sequence similarity using Mantel tests; these matK similarity matrices were constructed to represent the phylogenetic relatedness between the different PS under investigation. Inspection of the phylogenetic tree generated from the matK sequences showed that the PS grouped sensibly according to their taxonomic positions.
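The Mantel tests used here correlate two distance matrices and assess significance by permuting the sample labels of one matrix. A minimal pure-Python sketch is below; the small matrices at the end are hypothetical stand-ins (the study correlated weighted UniFrac distances against soil-property and matK distance matrices):

```python
import random
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mantel(d1, d2, n_perm=999, seed=1):
    """Mantel test: correlate the upper triangles of two square distance
    matrices, then permute the rows/columns of the second matrix to build
    a null distribution for the observed correlation."""
    n = len(d1)
    idx = [(i, j) for i in range(n) for j in range(i + 1, n)]
    flat1 = [d1[i][j] for i, j in idx]
    r_obs = pearson(flat1, [d2[i][j] for i, j in idx])
    rng = random.Random(seed)
    hits = 0
    order = list(range(n))
    for _ in range(n_perm):
        rng.shuffle(order)
        perm = [d2[order[i]][order[j]] for i, j in idx]
        if pearson(flat1, perm) >= r_obs:
            hits += 1
    p_value = (hits + 1) / (n_perm + 1)
    return r_obs, p_value

# Hypothetical example: two small matrices, one a scaled copy of the other,
# so the observed correlation should be 1.
d1 = [[0, 1, 2, 3], [1, 0, 4, 5], [2, 4, 0, 6], [3, 5, 6, 0]]
d2 = [[2 * v for v in row] for row in d1]
r_obs, p_value = mantel(d1, d2)
```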
Hierarchical clustering analysis was performed on the weighted UniFrac distance matrices and matK distance matrices using the complete linkage method in Stats R. Following this, dendrograms were constructed to visually compare differences in the clustering patterns of PS based on the weighted UniFrac distances of their fungal and bacterial communities versus their matK distances in ape R (Paradis and Schliep ). Variance partitioning (VP) analysis was performed in vegan R to partition the variance observed in bacterial and fungal community composition (as represented by weighted UniFrac distances) to the plant-related factors (Oksanen ). Four explanatory matrices were constructed to represent the different influencing factors. These were: PS; plant life history (PLH) (i.e. provenance + life span + functional group); plant rhizosphere traits (PRT) (i.e. mycorrhizal association + N 2 fixation); and SC. All unexplained (residual) variation from VP analysis was tentatively assigned to represent the influence of stochastic processes. Following VP analysis, distance-based redundancy analysis (db-RDA) was performed to test the significance of each explanatory matrix whilst conditioning for the other three matrices. In addition, forward stepwise selection was performed to identify the soil physicochemical properties that best accounted for the community variance that was partitioned to the influence of SC. Pairwise differences in the soil physicochemical properties selected by the forward selection model between the 37 different PS were identified using pairwise t -tests with Holm correction. The rarefied bacterial and fungal ASV tables were used as input for differential abundance analysis. First, the R package pime was used to select bacterial and fungal ASVs that best defined the microbiome of each PS (Luiz Fernando ). Prevalence intervals with an out-of-bag (OOB) error rate of 0% were selected as cut-offs. 
For fungal ASVs, this was a prevalence of 75%, which retained 217 ASVs and 1 168 389 sequences. For bacterial ASVs, this was a prevalence of 80%, which retained 771 ASVs and 433 795 sequences. PIME-filtered ASV count tables were used as input for differential abundance analysis using metagenomeSeq R (Paulson et al. ), where the log change estimate of each ASV between different PS was calculated using the fitLogNormal function. Significant differences in the log change estimates of ASVs between PS were determined using permutation tests ( n = 999) with correction for multiple comparisons using the Holm–Bonferroni method (holm). Heatmaps were produced using pheatmap R (Kolde and Kolde ) to display (a) the bacterial and fungal ASVs with significant log change estimates across PS and (b) the correlation shared between different PS (Pearson’s) based on the log change estimates of their bacterial and fungal ASVs.
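PIME's prevalence filtering retains only ASVs that occur in a sufficient fraction of samples, choosing the prevalence interval by the out-of-bag error of random forests. The sketch below implements only the simple prevalence step in pure Python, on a hypothetical mini ASV table (sample and ASV names are invented; the real package also evaluates OOB error per interval):

```python
def prevalence_filter(table: dict, min_prevalence: float) -> dict:
    """Keep ASVs detected in at least `min_prevalence` of samples.

    `table` maps sample -> {asv: count}. A simplified stand-in for PIME's
    prevalence-interval filtering.
    """
    samples = list(table)
    asvs = {asv for counts in table.values() for asv in counts}
    kept = {
        asv
        for asv in asvs
        if sum(table[s].get(asv, 0) > 0 for s in samples) / len(samples)
        >= min_prevalence
    }
    return {
        s: {asv: n for asv, n in counts.items() if asv in kept}
        for s, counts in table.items()
    }

# Hypothetical mini ASV table: four replicate samples of one plant species.
table = {
    "s1": {"ASV_1": 10, "ASV_2": 3},
    "s2": {"ASV_1": 7, "ASV_3": 1},
    "s3": {"ASV_1": 12, "ASV_2": 5},
    "s4": {"ASV_1": 9, "ASV_2": 2, "ASV_3": 4},
}
# ASV_1 occurs in 4/4 samples, ASV_2 in 3/4, ASV_3 in 2/4; at a 75%
# cut-off, ASV_3 is removed.
filtered = prevalence_filter(table, min_prevalence=0.75)
```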
Microbial species richness and phylogenetic diversity
There were no significant differences in the SR of bacterial communities across any of the plant-related factors (Table ). However, the Faith's diversity of bacterial communities was significantly higher in native versus exotic plants and in non-N 2 fixing versus N 2 fixing plants. For fungal communities, both Faith's diversity and SR were significantly higher in annual versus long-lived plants (Table ). The PD of fungal communities was significantly lower in trees versus shrubs and grasses and significantly higher in non-mycorrhizal versus ectomycorrhizal plants. The mean (± SD) values for Faith's diversity and SR of the fungal and bacterial communities across the different plant metadata factors are provided in .
Microbial beta-diversity and community composition
Bacterial and fungal community composition was significantly different between all plant-related factors (Table ). PS and genus were the factors with the highest R 2 values, thus accounting for most of the explained variation in bacterial and fungal community composition (Table ; see also and ). Although significant, the R 2 values for many of the plant-related factors representing functional plant traits (i.e. provenance, functional group, primary mycorrhizal association, life span, and N 2 fixation) were all low ( R 2 < 0.07). Bacterial and fungal communities both exhibited heterogeneous dispersion and a high degree of within-group variability. The degree of beta-dispersion (' F value') observed in bacterial communities differed significantly across the following factors: PS, plant genus, plant family, plant order, and mycorrhizal association (Table ). For fungal communities, significant beta-dispersion values were observed by plant family, plant order, plant class, provenance, and mycorrhizal association (Table ).
MatK gene sequence similarity
The distances in matK gene sequence similarity between the different PS did not significantly correlate with the corresponding weighted UniFrac distances for their bacterial (Mantel r = 0.134, P value = 0.075) or fungal (Mantel r = 0.040, P value = 0.306) communities. This is illustrated in , as the hierarchical clustering patterns of the different PS based on their matK gene sequences versus their bacterial and fungal community composition had little correspondence.
Variance partitioning (VP) analysis
VP analysis identified that, cumulatively, PS, PLH, PRT, and SC explained 34.59% of bacterial community variance and 27.27% of fungal community variance (Fig. ). Thus, both bacterial and fungal communities exhibited a high degree of residual, unexplained variation (65.41% and 72.73%, respectively). When individual explanatory matrices were tested for significance using partial db-RDA, both plant identity (Bacteria: F value = 1.40, P value < 0.001; Fungi: F value = 1.32, P value < 0.001) and SC (Bacteria: F value = 1.39, P value < 0.001; Fungi: F value = 1.41, P value < 0.001) significantly accounted for community variance. When the other explanatory matrices were conditioned out of the model, plant identity alone accounted for 9.52% of bacterial community variance and 3.65% of fungal community variance. In contrast, SC accounted for 5.67% of bacterial community variance and 9.40% of fungal community variance. PLH (Bacteria: F value = 0.00, P value > 0.05; Fungi: F value = 0.00, P value > 0.05) and PRT (Bacteria: F value = 0.00, P value > 0.05; Fungi: F value = 0.00, P value > 0.05) did not significantly account for any bacterial or fungal community variation. The composition of the root-associated soil microbiome may be indirectly influenced by plant-induced modification of the physicochemical environment. When examining the soil physicochemical properties that best accounted for bacterial community variation, forward selection models identified Olsen P ( F value = 10.76, P value < 0.05), sulphate sulphur ( F value = 2.54, P value < 0.05), and pH ( F value = 4.54, P value < 0.05) as significant. For fungal communities, forward selection models identified Olsen P ( F value = 9.95, P value < 0.05), AMN:TN ( F value = 3.69, P value < 0.01), and volume weight ( F value = 2.85, P value < 0.05) as significant. The values for these soil properties varied between the 37 different PS (Fig. ). Pairwise t -tests identified that Olsen P was significantly higher ( P adjusted < 0.05) in Acaena caesiiglauca (vs. Achillea millefolium, Dactylis glomerata, and Poa colensoi), Alnus glutinosa (vs. Ach. millefolium, D. glomerata, Holcus lanatus, Ozothamnus leptophyllus, and P. colensoi), and Pinus radiata (vs. Ach. millefolium, D. glomerata, and Po. colensoi). Volume weight was significantly higher in Ho. lanatus (vs. Hebe odora, O. leptophyllus, Olearia virgata, and Sophora microphylla) and Muehlenbeckia complexa (vs. He. odora and S. microphylla). Soil pH was significantly higher in D. glomerata and Ho. lanatus (vs. O. leptophyllus, Pi. contorta, Pi. radiata, Brachyglottis greyi, Coprosma robusta, and Ulex europaeus), Ach. millefolium (vs. Pi. radiata), and Po. colensoi and P. cita (vs. Pi. radiata, O. leptophyllus, and B. greyi). Although forward selection models identified AMN:TN and sulphate sulphur as significant influences on fungal and bacterial community composition, no significant pairwise differences between the 37 PS were detected for these properties. The mean ± SD values for all soil physicochemical properties associated with each PS are presented in .
Taxonomic differentiation across plant species
Out of the 771 bacterial ASVs that were retained following PIME filtering and used as input for differential abundance analysis, only 10.12% (78 ASVs) were identified as differentially abundant amongst PS ( P adjusted < 0.05). Furthermore, out of the 217 fungal ASVs retained following PIME filtering, only 16.59% (36 ASVs) were differentially abundant amongst PS ( P adjusted < 0.05). Figures and display the bacterial and fungal ASVs that had significantly different log change estimates across the PS under investigation. These results highlight that there were no large patterns of taxonomic differentiation amongst PS; that is, PS did not have markedly distinct taxonomic compositions. One exception was Agrostis capillaris (common bent or browntop grass), whose bacterial and fungal taxa were more evidently differentiated compared to the other PS. All the PS shared a significant ( P adjusted < 0.05) positive correlation based on the log change estimates of their bacterial (Pearson's r correlation; 0.70 ± 0.08 SD) and fungal ASVs (Pearson's r correlation; 0.69 ± 0.08 SD). These results indicate a low divergence of PS based on their root-associated soil microbiome ( ).
The root-associated soil microbiome plays a fundamental role in supporting plant health, productivity, and resilience against abiotic and biotic stressors (Mendes et al. , Berendsen et al. , Penton et al. ). Thus, pinpointing how different components of the plant-root-soil interface drive microbial selection and establishment is key to managing plant and soil health into the future. However, identifying the primary processes that drive microbial assembly is complex; assembly is suggested to arise from the interacting influences of plant genotype, developmental stage, root exudates, root morphology, PLH, soil type, and previous soil history (Chaparro et al. , Zhao et al. , Zhou et al. , Cordovez et al. ). By controlling for the starting soil mixture and surrounding environmental conditions, our research aimed to identify how the different phylogenetic, functional, and ecological traits of PS were related to the assembly of their root-associated soil microbiome.
The phylogenetic relatedness of plant hosts shared no relationship to the similarity in their root-associated soil microbiome
Our research aimed to test whether the phylogenetic relatedness of PS was correlated with the phylogenetic similarity of their root-associated soil microbiome, a hypothesis supported by previous research (Bouffaud et al. , Lambais et al. , Yeoh et al. , Lei et al. , Kaplan et al. , Hartman et al. ). Our results did not support this hypothesis, as the phylogenetic similarity in root-associated soil microbiomes did not correlate with the phylogenetic similarity between different PS. With the increase in phylogenetic ranking from PS to class level, we observed a consistent decline in the degree of microbial community variation that could be accounted for by plant phylogenetic origin. That is, higher phylogenetic rankings such as plant class and order explained only a small amount of compositional variation compared to PS-level identity. This suggests that, whilst PS may be used as a predictor of the root-associated soil microbiome, higher taxonomic rankings of PS cannot. Similar findings were observed by Fitzpatrick et al. ( ), who identified that although PS identity was a significant factor shaping rhizosphere assembly, the emergent structure of the rhizosphere microbiome shared no relationship with the phylogenetic relatedness between plant hosts.
Plant species and plant-induced changes in soil physicochemistry were the strongest predictors of microbial assembly
Although patterns in microbial assembly did not relate to the phylogenetic relationships among PS, species identity and differences in SC were the two most significant factors accounting for bacterial and fungal community variation, a finding also observed by Burns et al. ( ). Whilst both factors were significant, PS identity accounted for a greater proportion of bacterial community variance than SC. Several studies have reported PS identity to be a significant factor in shaping microbial assembly and community structure (Garbeva et al. , Berg and Smalla , Becklin et al. , Burns et al. ). These plant-species-dependent effects on microbial assembly have been attributed to the release of carbon-rich root exudates, which selectively enrich and recruit specific root-associated soil microorganisms (Bais et al. ), with the quality and composition of root exudates varying according to PS and plant developmental stage (Badri and Vivanco , Zhalnina et al. ). For fungal communities, SC accounted for a greater amount of community variance than PS identity. In our experiment, all PS were planted in the same starting soil mixture. As such, these effects are not associated with differences in soil type or edaphic properties per se but are changes that the plants themselves have directly expressed on the rhizosphere and soil environment. Furthermore, plant-driven changes in the composition of root-associated microorganisms during the early stages of microbial assembly may also have indirectly driven the shifts observed in SC. Plants can directly modify the conditions of their surrounding soil physicochemical environment via nutrient uptake/loss or via the chemical signatures of their leaf litter, roots, and root exudates. Plants can also shape their soil physicochemical environment indirectly by driving changes in the activity and composition of their root-associated microorganisms (Rengel and Marschner , Waring et al. , Henneron et al. ). Root-associated microorganisms have key roles in the transformation and mobilization of inorganic and organic substrates into more plant-accessible soil nutrients, meaning that they can have a transformative impact on soil nutrient cycling (Finzi et al. , Dlamini et al. , Dotaniya and Meena ). Plant-induced changes in SC provide an example of how secondary deterministic processes can indirectly shape microbial assembly. As plant roots modify the conditions of the rhizosphere and root-associated soil environment, this encourages the growth of microorganisms that can occupy the modified habitat space (Hinsinger , Liang et al. , Bell et al. , van Veelen et al. , Hernández-Cáceres et al. ). Plant-available P, such as that measured by Olsen P (bicarbonate extractable), is a key measure of soil fertility and ecosystem productivity (Vitousek et al. ). In our research, Olsen P had a particularly strong relationship with changes in root-associated soil fungal and bacterial communities. In particular, the root-associated soils from the PS Pi. radiata, Al. glutinosa, and Ac. caesiiglauca had high Olsen P values compared to the other PS. These observations may demonstrate the process of plants mobilizing soil nutrients essential for their individual growth and fitness (Will , Chen et al. , Tallec et al. , Varin et al. ) and how this is linked to changes in soil microbiology. When root-induced changes in soil chemistry influence microbial assembly, this ultimately impacts plant health and performance, and thereby success in the ecosystem. These form plant-soil feedback mechanisms that amplify over successive life cycles (van der Putten et al. , Bennett and Klironomos ) and are profoundly connected with ecosystem-level processes.
The functional traits of plant species did not influence microbial community assembly
In our study, plant functional traits such as life span, functional group, provenance, N 2 fixation, and mycorrhizal association were not identified as strong drivers of microbial community assembly. It is important to consider that we examined root-associated soil microbes, not microbes that colonize and develop symbiotic relationships with plant roots, such as endophytes or mycorrhizal fungi. Had we examined the assembly patterns of plant-symbiotic microbes rather than free-living soil microbes, we may have observed the functional traits of host plants to have a more pronounced impact on patterns of microbial assembly. Our findings are complementary to those of Hartman et al. ( ), who identified that the symbiotic associations of plant hosts significantly impact the root microbiome. Unlike Hartman et al. ( ) and Bodenhausen et al. ( ), who sampled the root-associated microbiome, our study examined soil adjacent to plant roots. Thus, discrepancies between our findings and those of Hartman et al. ( ) are likely due to the different sampling methodologies, as we sampled soils at a greater physical distance from the root. Additionally, our research investigated root-associated soil microbial assembly (a) following a single life cycle of the plant and (b) at a single time point during the plant's developmental stage. Thus, the absence of clear divergences in microbial assembly between plants with contrasting functional traits may be a consequence of our experiment's relatively short duration, or of other factors. For example, although we studied plants with different life cycle strategies (i.e. annual vs. perennial), we did not study them over repeated life cycles, where the outcomes of their contrasting life histories may have modified their soil environment to a degree that influenced microbial assembly. Several studies have reported that the soil microbiome shifts according to plant development, influenced by changes in plant root morphology and exudate release at each developmental stage (Micallef et al. , Chaparro et al. ). Furthermore, the divergence in microbial assembly between PS may amplify over successive life cycles (Cordovez et al. ). This increasing divergence is driven by plant-soil feedback mechanisms, whereby successive modifications of soil biotic and abiotic conditions by plants exert greater selection pressures on their root-associated soil microbiota (Hu et al. ).
Root-associated soil microbiomes exhibited a large degree of unexplained variation
Niche-based theories of microbial community assembly assert that deterministic processes govern community structure, such as adaptive species traits, biotic interactions, and environmental filtering (Dini-Andreote et al. , Zhou and Ning ). As discussed, aside from plant identity and plant-induced changes in SC, the functional plant traits measured in our study accounted for very little of the variation observed in root-associated soil microbial communities. Our results identified that a large amount of the compositional variation in root-associated soil communities remained unexplained, with over 73% of fungal community variation and 65% of bacterial community variation unaccounted for. Given the breadth of variables we assessed, much of this variation may represent elements of stochastic processes driving random community assemblage. More recently, there has been a growing body of literature recognizing the degree to which stochastic processes may govern the resulting structure of microbial communities (Caruso et al. , Zhang et al. , Zhou and Ning , Chen et al. , Hou et al. , Huang et al. ). The PS under investigation in this study were at relatively early stages of succession and growth (plants were grown for 10 months), which may explain the large amount of unexplained compositional variation we observed. Stochastic processes are reported to dominate microbial assembly during the early stages of community establishment, as the roots of plant seedlings release an abundant supply of exudates, which reduces competitive biotic interactions (Dini-Andreote et al. ). However, throughout community development, microbiomes transition from random community assembly to more highly structured, niche-differentiated assemblages as a result of functional adaptations to environmental selection pressures (Aguilar and Sommaruga , Hu et al. ). As plants develop, they alter the bioavailability of resources according to their needs; thus, deterministic processes increasingly dominate microbial community assembly as the surrounding environment is increasingly modified by plant growth (Dini-Andreote et al. ). The modification of soil physicochemical properties by PS was observed for several of the PS in our study, with Pi. radiata, Ac. caesiiglauca, and Al. glutinosa driving changes in Olsen P, for example. It is possible that if the microbial communities were measured over longer periods, community assembly would be more evidently niche-differentiated as each plant exerted unique selection pressures within the root-associated soil environment.
Our research aimed to test whether the phylogenetic relatedness of PS was correlated with the phylogenetic similarity of their root-associated soil microbiome—a hypothesis that has been supported by previous research (Bouffaud et al. , Lambais et al. , Yeoh et al. , Lei et al. , Kaplan et al. , Hartman et al. ). Our results did not support this hypothesis, as the phylogenetic similarity in root-associated soil microbiomes did not correlate with the phylogenetic similarity between different PS. With the increase in phylogenetic ranking from PS to class level, we observed a consistent decline in the degree of microbial community variation that could be accounted for by plant phylogenetic origin. That is, higher phylogenetic rankings such as plant class and order only explained a small amount of compositional variation compared to PS-level identity. This suggests that, whilst PS may be used as a predictor of the root-associated soil microbiome, higher taxonomic rankings of PS cannot. Similar findings were observed by Fitzpatrick et al. ( ), who identified that although PS identity was a significant factor shaping rhizosphere assembly, the emergent structure of the rhizosphere microbiome shared no relationship with the phylogenetic relatedness between plant hosts.
Although patterns in microbial assembly did not relate to the phylogenetic relationships among PS, species identity and differences in SC were the two most significant factors that accounted for bacterial and fungal community variation—a finding also observed by Burns et al. ( ). Whilst both factors were significant, PS identity accounted for a greater proportion of bacterial community variance than SC. Several studies have reported PS identity to be a significant factor in shaping microbial assembly and community structure (Garbeva et al. , Berg and Smalla , Becklin et al. , Burns et al. ). These plant-species-dependent effects on microbial assembly have been attributed to the release of carbon-rich root exudates, which selectively enrich and recruit specific root-associated soil microorganisms (Bais et al. ), with the quality and composition of root exudates varying according to PS and plant developmental stage (Badri and Vivanco , Zhalnina et al. ). For fungal communities, SC was identified to account for a higher amount of community variance than PS identity. In our experiment, all PS were planted in the same starting soil mixture. As such, these effects are not associated with differences in soil type or edaphic properties per se but are changes that the plants themselves have directly exerted on the rhizosphere and soil environment. Furthermore, plant-driven changes in the composition of root-associated microorganisms throughout the early stages of microbial assembly may also have indirectly driven the shifts observed in SC. Plants can directly modify the conditions of their surrounding soil physicochemical environment via nutrient uptake/loss or by the chemical signatures of their leaf litter, roots, and root exudates. Plants can also shape their soil physicochemical environment indirectly by driving changes in the activity and composition of their root-associated microorganisms (Rengel and Marschner , Waring et al. , Henneron et al. ).
Root-associated microorganisms have key roles in the transformation and mobilization of inorganic and organic substrates into more plant-accessible soil nutrients, meaning that they can have a transformative impact on soil nutrient cycling (Finzi et al. , Dlamini et al. ; Dotaniya and Meena ). Plant-induced changes in SC provide an example of how secondary deterministic processes can indirectly shape microbial assembly. As plant roots modify the conditions of the rhizosphere and root-associated soil environment, this encourages the growth of microorganisms that can occupy the modified habitat space (Hinsinger , Liang et al. , Bell et al. , van Veelen et al. , Hernández-Cáceres et al. ). Plant-available P, such as that measured by Olsen P (bicarbonate extractable), is a key measure of soil fertility and ecosystem productivity (Vitousek et al. ). In our research, Olsen P had a particularly strong relationship with changes in root-associated soil fungal and bacterial communities. In particular, the root-associated soils from the PS Pi. radiata, Al. glutinosa , and Ac. caesiiglauca had high Olsen P values compared to the other PS. These observations may demonstrate the process of plants mobilizing soil nutrients essential for their individual growth and fitness (Will , Chen et al. , Tallec et al. , Varin et al. ) and how these are linked to changes in soil microbiology. When root-induced changes in soil chemistry influence microbial assembly, this ultimately impacts plant health and performance, and thereby success in the ecosystem. These form plant-soil feedback mechanisms that amplify over successive life cycles (van der Putten et al. , Bennett and Klironomos ) and are profoundly connected with ecosystem-level processes.
The functional traits of plant species did not influence microbial community assembly

In our study, plant functional traits such as life span, functional group, provenance, N₂ fixation, and mycorrhizal association were not identified as strong drivers of microbial community assembly. It is important to consider that we examined root-associated soil microbes, but not microbes that colonize and develop symbiotic relationships with plant roots, such as endophytes or mycorrhizal fungi. Had we examined the assembly patterns of plant-symbiotic microbes rather than free-living soil microbes, we might have observed a more pronounced impact of host plant functional traits on patterns of microbial assembly. Our findings are complementary to Hartman et al. ( ), who identified that the symbiotic associations of plant hosts significantly impact the root microbiome. Unlike Hartman et al. ( ) and Bodenhausen et al. ( ), who sampled the root-associated microbiome, our study examined soil adjacent to plant roots. Thus, discrepancies between our findings and those of Hartman et al. ( ) are likely due to the different sampling methodologies, as we sampled soils at a greater physical distance from the root. Additionally, our research investigated root-associated soil microbial assembly following (a) a single life cycle of the plant and (b) a single time point during the plant’s developmental stage. Thus, the absence of clear divergences in microbial assembly between plants with contrasting functional traits may be a consequence of our experiment’s relatively short duration or other factors. For example, although we studied plants with different life cycle strategies (i.e. annual vs. perennial), we did not study them over repeated life cycles, where the outcomes of their contrasting life histories may have modified their soil environment to a degree that influenced microbial assembly.
Several studies have reported the soil microbiome to shift according to plant development, influenced by changes in plant root morphology and exudate release with each developmental stage (Micallef et al. , Chaparro et al. ). Furthermore, the divergence in microbial assembly between PS may amplify over successive life cycles (Cordovez et al. ). This increasing divergence is driven by plant-soil feedback mechanisms, whereby successive modifications in soil biotic and abiotic conditions by plants exert greater selection pressures on their root-associated soil microbiota (Hu et al. ).
Root-associated soil microbiomes exhibited a large degree of unexplained variation

Niche-based theories of microbial community assembly assert that deterministic processes, such as adaptive species traits, biotic interactions, and environmental filtering, govern community structure (Dini-Andreote et al. , Zhou and Ning ). As discussed, aside from plant identity and plant-induced changes in SC, the functional plant traits measured in our study accounted for very little of the variation observed in root-associated soil microbial communities. Our results identified that a large amount of the compositional variation in root-associated soil communities remained unexplained, with over 73% of fungal community variation and 65% of bacterial community variation unaccounted for. Given the breadth of variables we assessed, much of this variation may represent elements of stochastic processes driving random community assemblage. More recently, there has been a growing body of literature recognizing the degree to which stochastic processes may govern the resulting structure of microbial communities (Caruso et al. , Zhang et al. , Zhou and Ning , Chen et al. , Hou et al. , Huang et al. ). The PS under investigation in this study were at relatively early stages of succession and growth (plants were grown for 10 months), which may explain the large amount of unexplained compositional variation we observed. Stochastic processes are reported to dominate microbial assembly during the early stages of community establishment, as the roots of plant seedlings release an abundant supply of exudates, which reduces competitive biotic interactions (Dini-Andreote et al. ). However, throughout community development, microbiomes transition from random community assembly to more highly structured, niche-differentiated assemblages because of functional adaptations to environmental selection pressures (Aguilar and Sommaruga , Hu et al. ).
As plants develop, they alter the bioavailability of resources according to their needs; thus, deterministic processes increasingly dominate microbial community assembly as the surrounding environment is increasingly modified by plant growth (Dini-Andreote et al. ). The modification of soil physicochemical properties by PS was observed for several of the PS in our study, with Pi. radiata, Ac. caesiiglauca , and Al. glutinosa driving changes in Olsen P, for example. It is possible that if the microbial communities were measured over longer periods, community assembly would be more evidently niche-differentiated as each plant exerted unique selection pressures within the root-associated soil environment.
Our research identified that, during the early stages of plant growth and establishment, PS identity and plant-induced changes in SC were the most significant factors shaping root-associated soil microbial assembly. The functional traits of the PS under investigation, such as their life span, provenance, growth form, and mycorrhizal associations, did not significantly account for any of the structural variation observed in bacterial or fungal communities between plants. Although PS identity was determined to be a significant factor driving microbial assembly, the phylogenetic relationships among the 37 PS under investigation bore no relationship to the similarity of their root-associated soil microbiomes. Thus, our findings reject the hypothesis that plant phylogenetic relatedness can be used to predict the emergent structure of the root-associated soil microbiome.
Microfluidics-based strategies for molecular diagnostics of infectious diseases

Infectious diseases arise from pathogens, including bacteria, viruses, and parasites, with a global distribution. Unlike other diseases, pathogens rapidly infect and are transmitted between human and animal carriers through inoculation, air, and water media . It is essential to prevent infectious diseases as a public health measure. There are three fundamental strategies for managing infectious diseases: (1) controlling the source of infection; (2) blocking transmission pathways; and (3) protecting susceptible populations. Among these, control of the infectious source is considered the most crucial strategy because of its convenience and low cost. Prompt diagnosis, isolation, and treatment of infected persons are essential, which require rapid, sensitive, and accurate diagnostic strategies . The current diagnosis of infectious diseases usually combines clinical examinations based on signs and symptoms with laboratory tests, such as cell culture and molecular diagnostics, which require well-trained personnel, time-consuming procedures, and expensive testing equipment . Prevention of infectious disease outbreaks calls for rapid, low-cost, accurate, and on-site diagnosis, particularly in resource-poor areas where infectious diseases are usually prevalent and severe , as does treatment in the wilderness or on the battlefield, where emergencies occur unpredictably but medical assistance is limited . In such cases, microfluidics, a technology that combines micro-electro-mechanical system technology, nanotechnology, or materials science for precise fluid manipulations , offers a new opportunity for point-of-care testing (POCT) of infectious pathogens outside of hospitals and laboratories.
Microfluidic technology enables a sample- and cost-saving route for molecular diagnostics during disease outbreaks compared with traditional laborious diagnostics. The worldwide spread of coronavirus disease 2019 (COVID-19) was caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2); as a result, the importance of microfluidics for timely prevention and control of the pandemic has again been emphasized . Compared with traditional diagnostics, microfluidic POCT utilizes miniaturized and portable devices, ranging from benchtop analyzers to small lateral flow strips, that conduct tests near the sampling sites . These tests are advanced for simplified or omitted sample preparation, rapid signal amplification, and sensitive signal readout, leading to a short duration and accurate results within minutes. The availability and mass production of microfluidics-based point-of-care tools have expanded their applications for cost-effective and straightforward diagnosis outside the hospital, near the patient, or even at home. Among the existing strategies for diagnosing infectious diseases, molecular diagnostics are among the most sensitive methods . Moreover, molecular diagnostics usually serve as the gold standard method for ongoing COVID-19 detection, allowing direct detection of virus-specific RNA or DNA regions prior to onset of the immune response . In the current review, we present the latest advances in microfluidics-based processes for molecular diagnostics of infectious diseases, from an academic perspective to future industrial outlook (Fig. ). We start with the three steps critical for nucleic acid testing: on-chip sample pre-processing; nucleic acid amplification; and signal read-out. We then compare various types of microfluidic platforms with their structures and functions, which show unique features (both pros and cons).
The digital nucleic acid assay is further discussed and exemplified as the third-generation technology for the absolute quantification of infectious pathogen molecules. Additionally, several typical and recent commercial POCT devices will be introduced, which illustrate the current state of the microfluidic POCT market for molecular diagnostics. Our outlook on future applications will also be discussed.

Based on the implemented functions, the modules of a microfluidic chip for nucleic acid testing can be divided into three categories (sampling, sensing, and signaling) . Among these modules, the sampling module mainly realizes sample lysis and nucleic acid extraction. The sensing module primarily operates the conversion and amplification of nucleic acid signals. The signaling module achieves detection of the signal after conversion and processing by the sensing module. We will summarize different chips that can achieve the “sample in and answer out” function according to the on-chip nucleic acid testing procedure.

Sampling module: lyse the original samples and extract nucleic acids

The foremost step of nucleic acid testing is nucleic acid extraction, which refers to the isolation of the targeted nucleic acid from the original samples. Nucleic acid extraction is performed to purify nucleic acids from other molecular contaminants, ensure the integrity of the primary structure of nucleic acid molecules, and optimize yields. Nucleic acid extraction requires essential sample lysis and nucleic acid capture, the quality and efficiency of which have a huge impact on research and diagnosis results. Any subtle adverse effects during extraction limit downstream detection. For example, polymerase chain reaction (PCR) and loop-mediated isothermal amplification (LAMP) approaches are inhibited by residual organic solvents in nucleic acid extraction reagents, such as ethanol and isopropanol .
Liquid–liquid extraction and solid-phase extraction are among the most popular modes of nucleic acid extraction ; however, liquid–liquid extraction on chips is extremely limited because the reagents used in liquid–liquid extraction are corrosive to most microfluidic chips. Herein, we emphasize solid-phase extraction methods based on microchips and compare their strengths and weaknesses.

Silicon-based strategies

Silicon is a compatible substrate material for nucleic acids because it is biocompatible and stable, and has easily modifiable properties . Importantly, when modified with silica or other materials, this composite adsorbs negatively-charged nucleic acids under low-pH and hypersaline conditions, while eluting them with high-pH and low-salt solutions. Based on this phenomenon, nucleic acids can be purified. Silicon-based materials of various forms have been exploited for nucleic acid extraction in microfluidics, such as silica beads, powder, microfiber filters, and silica gel membranes . Depending on the material properties, silicon-based materials can be utilized in various ways on microchips. For example, silica beads, powders, and commercial nanofilters can simply be placed into the wells or microchannels of the microfluidic chip to assist the extraction of nucleic acids from samples . Surface-modified silica gel membranes can also be used to rapidly purify DNA from pathogens at low cost. For example, Wang et al. introduced a universal and portable system by combining a denaturation bubble-mediated strand exchange amplification reaction with chitooligosaccharide-coated silica gel membranes, through which 10²–10⁸ colony-forming units (CFU)/ml of Vibrio parahaemolyticus were successfully detected, and the existence of the pathogen was easily visualized. Powell et al.
then used a silicon-based microchip to automatically detect hepatitis C virus (HCV), human immunodeficiency virus (HIV), Zika virus, and human papilloma virus in multiplex, in which 1.3 µl meandering microreactors were designed to capture viral RNA and perform in situ amplification. In addition to these methods, surface-modified silicon micropillars play a key role in nucleic acid extraction because the geometrical dimensions and the properties of the modifying material significantly improve extraction efficiency. Chen et al. proposed a microfluidic platform to extract RNA at low concentrations based on amino-coated silicon micropillars. The microfluidic device integrates micropillar arrays within an area of 0.25 cm² on the silicon substrate to substantiate a higher extraction efficiency with high surface-to-volume ratio designs. Benefiting from this design, the microfluidic device achieves up to 95% nucleic acid extraction efficiency. These silicon-based strategies demonstrate the value of rapidly isolating nucleic acids at low cost. When combined with microfluidic chips, silicon-based extraction strategies not only improve the efficiency of nucleic acid testing but also facilitate the miniaturization and integration of analytical devices .

Magnetic-based strategies

The magnetic-based isolation approach exploits magnetic particles to extract nucleic acids under external magnetic fields. The commonly utilized magnetic particles include silica-coated, amino-coated, and carboxyl-coated Fe₃O₄ or γ-Fe₂O₃ magnetic particles . Compared with silicon-based solid-phase extraction techniques, a distinct feature of magnetic particles is the ease of manipulation and control using an external magnet.
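For both formats, capture performance scales with the solid-phase surface area available for nucleic acid binding. The sketch below compares a micropillar chip with a bead suspension under purely illustrative dimensions (the 0.25 cm² footprint echoes the chip above; pillar and bead sizes are assumptions, not values from the cited studies):

```python
import math

def pillar_array_area_mm2(footprint_mm2, diameter_um, height_um, pitch_um):
    """Total capture area of a square array of cylindrical micropillars:
    flat footprint plus pillar sidewalls (each pillar's top face offsets
    the patch of floor it occupies)."""
    n_pillars = footprint_mm2 * 1e6 / pitch_um ** 2        # pillars in the footprint
    sidewall_mm2 = n_pillars * math.pi * diameter_um * height_um * 1e-6
    return footprint_mm2 + sidewall_mm2

def bead_suspension_area_mm2(bead_diameter_um, bead_count):
    """Total surface area of a suspension of spherical beads."""
    return bead_count * math.pi * bead_diameter_um ** 2 * 1e-6

# 0.25 cm^2 (25 mm^2) chip with 20 um pillars, 50 um tall, on a 40 um pitch,
# versus 10^7 one-micron magnetic beads:
chip_area = pillar_array_area_mm2(25, 20, 50, 40)   # ~74 mm^2, ~3x the flat area
bead_area = bead_suspension_area_mm2(1.0, 1e7)      # ~31 mm^2
```

Such back-of-the-envelope estimates help explain why dense, high-aspect-ratio pillar designs can push extraction efficiencies toward the 95% figure reported above.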
Utilizing the electrostatic interactions between nucleic acids and silica, nucleic acids are adsorbed onto the surface of silica-encapsulated magnetic particles under hypersaline and low-pH conditions, while the molecules can be eluted again under hyposaline and high-pH conditions. Silica-coated magnetic beads allow DNA extraction from large-volume samples (400 μl) with the help of magnet-guided movement . As a demonstration, Rodriguez-Mateos et al. used a tunable magnet to manipulate the transfer of magnetic beads between different chambers. Based on silica-coated magnetic particles, 470 copies/ml of genomic SARS-CoV-2 RNA can be extracted from wastewater samples for reverse-transcription LAMP (RT-LAMP) detection, and the answer can be read out within 1 h by the unaided eye (Fig. a). Positively-charged magnetic particles provide an ideal surface for the nucleic acid phosphate backbone to attach. At a specific salt concentration, the negatively-charged nucleic acid phosphate groups can be adsorbed onto the surface of magnetic composite particles by positive charges. Thus, magnetic nanoparticles with a rough surface and a high density of amino groups have been developed for nucleic acid extraction. After magnetic separation and blocking, the magnetic nanoparticle–DNA complexes can be used directly for PCR, omitting complex and time-consuming purification and elution operations . Negatively-charged carboxyl-coated magnetic nanoparticles have also been made to isolate nucleic acids, which are adsorbed onto the surface in high concentrations of polyethylene glycol and sodium chloride solutions . Utilizing these surface-modified magnetic beads, DNA extraction is compatible with downstream amplification. Dignan et al. described an automatic and portable centrifugal microfluidic platform for nucleic acid pre-processing that allows in situ use by non-technical personnel.
Moreover, the compatibility of the extracted DNA with LAMP, a technique ideal for point-of-care nucleic acid analysis, was further demonstrated, given its minimal hardware requirements and adaptability to a colorimetric assay (Fig. b). Magnetic bead methods enable automated extraction, and some commercial automatic nucleic acid extractors exist [KingFisher; ThermoFisher (Waltham, MA, U.S.), QIAcube® HT; CapitalBio (Beijing, China), and Biomek®; Beckman (Miami, FL, U.S.)]. The combination of magnetic beads and microfluidics for automated, high-efficiency nucleic acid extraction has the potential to facilitate the growth of molecular diagnostics; however, this combination still depends largely on complex control systems to precisely manipulate the magnetic beads, which explains why prevailing commercial products are bulky and expensive, restricting further application in POCT.

Porous materials-based strategies

Several porous materials, such as modified nitrocellulose filters, Flinders Technology Associates (FTA) cards, polyethersulfone-based filter paper, and glycan-coated materials, have also been utilized for nucleic acid detection . Porous fibrous materials, such as fibrous papers, were first used for DNA extraction by exploiting the physical entanglement of long-chain DNA molecules with the fibers. Small pores impose strong physical constraints on DNA molecules, which benefits DNA extraction; however, because pore sizes vary across fibrous papers, the extraction efficiency does not satisfy the needs of DNA amplification . The FTA card, a commercial filter paper used in the forensic field, has been widely applied to other molecular diagnostics. Using cellulose filter paper impregnated with various chemicals that help lyse cellular membranes, the released DNA can be protected from degradation for up to 2 years.
More recently, impregnated cellulose paper has been developed for molecular testing of various pathogens, including SARS-CoV-2, leishmaniasis, and malaria . HIV in separated plasma is directly lysed, and viral nucleic acids are enriched by an integrated, flow-through FTA® membrane in the concentrator, which enables nucleic acid preparation with high efficiency (Fig. c). The main challenge for nucleic acid testing using FTA cards is that the impregnated chemicals, such as guanidine and isopropanol, inhibit subsequent amplification reactions. To solve this problem, chitosan-modified Fusion 5 filter paper was developed for high-efficiency nucleic acid extraction, combining the physical entanglement of DNA molecules with the filter fibers and the electrostatic adsorption of DNA onto the chitosan-modified fibers (Fig. d). Similarly, Zhu et al. demonstrated a chitosan-modified, capillary-assisted, microfluidic-based in situ PCR method to rapidly extract and detect Zika virus RNA. Because chitosan acts as a pH-responsive “on and off” switch, nucleic acids can be adsorbed/desorbed in a lysate/PCR mixture environment, respectively. As described, these strategies incorporate the strengths of different solid-phase materials and increase the performance of nucleic acid extraction in microfluidics. In practical applications, extensive use of these materials is not economical, while using them for proper processing or surface modification of common materials can still maintain their functions. Thus, cost could be decreased by implementing these strategies after pilot studies.
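The pH-responsive “on and off” switching of chitosan can be made quantitative with the Henderson–Hasselbalch relation: below the amine pKa (commonly cited as ≈ 6.5 for chitosan; this value is a general assumption, not taken from the cited studies) the polymer is protonated and binds the DNA phosphate backbone, while above it the charge, and hence the binding, is lost. A minimal sketch:

```python
def protonated_fraction(ph, pka=6.5):
    """Henderson-Hasselbalch: fraction of chitosan amine groups carrying a
    positive charge (and thus able to bind DNA) at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

capture = protonated_fraction(5.0)   # acidic lysate: ~97% of amines charged -> DNA binds
release = protonated_fraction(8.5)   # PCR buffer:    ~1% charged            -> DNA elutes
```

The steep transition around the pKa is what makes a simple buffer exchange sufficient to toggle between capture and elution.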
Sensing module: convert and amplify nucleic acid signals

Nucleic acid testing on microfluidic platforms often uses small sample volumes (< 100 µl), and therefore requires amplification of the target nucleic acids with specific probes for conversion to a signal that is convenient for downstream detection (optical, electrical, and magnetic) . Nucleic acid amplification in microfluidics can also speed up the reaction, optimize the limit of detection, lower the sample demand, and increase detection accuracy . Recently, various nucleic acid amplification methods enabling fast and accurate detection, including PCR and several isothermal amplification reactions, have been applied in microfluidics. This section summarizes these promising techniques based on microfluidic systems for nucleic acid testing.

PCR

PCR mimics the DNA replication process of organisms; its theory is detailed elsewhere and thus will not be discussed herein. PCR can amplify very small amounts of target DNA/RNA at an exponential rate, making it a powerful tool for rapid nucleic acid detection. In recent decades, many portable microfluidic devices equipped with thermal cycling systems have been developed to perform PCR for point-of-care diagnosis . According to the temperature control method, on-chip PCR can be divided into four types: traditional, continuous-flow, spatially-switched, and convective PCR . For example, Ji et al. established a direct reverse-transcription quantitative PCR (RT-qPCR) assay on a self-designed microfluidic platform for multiplexed detection of SARS-CoV-2 and influenza A and B viruses in pharyngeal swab samples (Fig. a). Park et al. established a simple pathogen analysis chip by integrating film-based PCR, electrode, and polydimethylsiloxane-based finger-actuated microfluidic modules. Nevertheless, both works exemplify the common disadvantage of traditional PCR.
Thermal cycling is necessary for PCR, which restricts further miniaturization of the device and shorter testing times. The development of microfluidics-based continuous-flow and spatially-switched PCR is essential to solve this problem. Using a long serpentine channel or a short straight channel, continuous-flow PCR achieves rapid amplification by actively pumping reagents through three pre-heated zones sequentially and cyclically. This operation avoids the transition stage between different reaction temperatures, which significantly reduces the testing time (Fig. b). In another study, Jung et al. proposed a novel Rotary PCR Genetic Analyzer for ultrafast, multiplexed reverse-transcription PCR that combines features of stationary and flow-through PCR (Fig. c). The PCR microchip rotates through three thermal blocks at different temperatures for nucleic acid amplification, as follows: I. block at 94 °C for denaturation; II. block at 58 °C for annealing; and III. block at 72 °C for extension. Through capillary tubes and loops, or even thin disks, convective PCR can rapidly amplify nucleic acids via naturally induced free thermal convection, without an external pump. For instance, a cyclic olefin polymer microfluidic platform was developed on a fabricated rotating heater stage, utilizing a centrifugation-assisted thermal cycle in a ring-structured microchannel for PCR (Fig. d). The reaction solution is driven by thermal convection and continuously cycles between high and low temperatures in the ring-structured microchannel. The whole amplification process can be finished within 10 min, with a limit of detection of 70.5 pg/channel. As expected, rapid PCR is a powerful tool for both fully-integrated “sample-to-answer” molecular diagnostic systems and multiplexed analysis systems.
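In continuous-flow designs, the time spent at each temperature is set by geometry rather than by a thermal ramp: the residence time in a zone is simply the zone's channel volume divided by the volumetric flow rate. A rough sizing sketch under assumed dimensions (hypothetical values, not taken from the devices cited above):

```python
def zone_residence_s(zone_length_mm, width_um, depth_um, flow_ul_min):
    """Residence time (s) of the reaction plug in one temperature zone of a
    continuous-flow PCR channel: zone volume / volumetric flow rate."""
    volume_ul = zone_length_mm * 1e3 * width_um * depth_um * 1e-9  # um^3 -> uL
    return volume_ul / (flow_ul_min / 60.0)

# 10 mm zones with a 100 um x 50 um cross-section at 5 uL/min:
t_zone = zone_residence_s(10, 100, 50, 5)   # 0.6 s per temperature zone
t_total = 3 * t_zone * 30                   # ~54 s for 30 three-zone cycles
```

This is why channel layout and pump rate, rather than heater ramp rates, dominate the cycle time of continuous-flow PCR.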
With rapid PCR, the time spent detecting SARS-CoV-2 is significantly decreased, which helps to control the COVID-19 pandemic efficiently.

Isothermal amplification

PCR requires a complex thermocycler, which is inappropriate for POCT. Recently, isothermal amplification methods have been applied to microfluidics, including but not limited to LAMP, recombinase polymerase amplification (RPA), and nucleic acid sequence-based amplification . With these technologies, nucleic acids are amplified at a constant temperature, enabling portable POCT devices for molecular diagnostics with low cost and high sensitivity. High-throughput microfluidics-based LAMP analysis enables multiplexed detection of infectious diseases . In combination with centrifugal microfluidic systems, LAMP can further promote the automation of nucleic acid detection . A rotate-and-react SlipChip was developed to visually detect multiple bacteria in parallel by LAMP (Fig. a). With optimized LAMP in the assay, the fluorescence signal-to-noise ratio was approximately fivefold, and the limit of detection reached 7.2 copies/μl of genomic DNA. Moreover, the presence of five common digestive bacterial pathogens, including Bacillus cereus , Escherichia coli , Salmonella enterica , Vibrio fluvialis , and Vibrio parahaemolyticus , was visualized with this method in < 60 min. The advantages of LAMP in microfluidics include, but are not limited to, rapid reaction and miniaturized detection. Yet, because of the reaction temperature of LAMP (approximately 70 °C), aerosols are inevitably produced, resulting in a high rate of false-positive results. Detection specificity, primer design, and temperature control also need to be optimized for LAMP. Moreover, chip designs that implement detection of multiple targets on one chip are of significant value and should be developed.
RPA can partially reduce the high false-positive rates of LAMP because its relatively low reaction temperature (approximately 37 °C) causes a much smaller evaporation problem. In the RPA system, two opposing primers initiate DNA synthesis by combining with recombinase enzymes, and the amplification can be completed within 10 min. The entire process of RPA is therefore much faster than PCR or LAMP. Microfluidic technology has been demonstrated to further improve the speed and accuracy of RPA in recent years. For example, Liu et al. developed a microfluidic-integrated lateral flow recombinase polymerase amplification assay to rapidly and sensitively detect SARS-CoV-2, integrating reverse-transcription RPA (RT-RPA) and a universal lateral flow dipstick detection system into a single microfluidic system (Fig. b). The assay can be finished in approximately 30 min with a limit of detection of 1 copy/μl, or 30 copies/sample. A wearable microfluidic device was developed by Kong et al. for rapid and straightforward detection of HIV-1 DNA through RPA, utilizing body temperature and a cellphone-based fluorescence detection system (Fig. c). The wearable RPA test can detect target sequences at 100 copies/ml within 24 min, showing great potential for rapid diagnosis of HIV-1-infected infants in resource-limited areas. RPA based on microfluidics has witnessed rapid advances; however, the costs of chip fabrication and reagent consumption are high and should be lowered to increase the accessibility of the technique. In addition, the high sensitivity of RPA can promote the amplification of non-specific products, especially when contamination exists. These limitations may affect the application of RPA in microfluidic systems and deserve further optimization.
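The two ways the RPA detection limit is quoted above can be reconciled with simple unit arithmetic; assuming both figures describe the same assay, they imply a 30 µl sample volume (an inference for illustration, not a number stated in the text):

```python
# Consistency check between the two quoted forms of the RPA detection limit:
# 1 copy/µl versus 30 copies/sample implies a 30 µl sample volume.

def implied_sample_volume_ul(copies_per_sample: float,
                             copies_per_ul: float) -> float:
    return copies_per_sample / copies_per_ul

vol = implied_sample_volume_ul(30, 1)
print(f"implied sample volume: {vol:.0f} µl")

# The HIV-1 wearable assay is quoted per millilitre instead:
hiv_lod_per_ml = 100
print(f"100 copies/ml == {hiv_lod_per_ml / 1000} copies/µl")
```

Keeping detection limits in a single unit (copies per microlitre here) makes assays quoted per sample, per microlitre, and per millilitre directly comparable.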
Well-designed primers and probes for different targets are also required to increase the feasibility of RPA-based microfluidic strategies in POCT.

Clustered regularly interspaced short palindromic repeats (CRISPR)-based methods for nucleic acid testing

Cas13 and Cas12a can cleave nucleic acids indiscriminately once activated, and thus can be developed into detection and diagnostic tools. Cas13 and Cas12a are activated upon binding their target RNA or DNA, respectively. Once activated by a guide RNA that targets pathogen-specific sequences, the proteins begin to cleave other nucleic acids nearby, including a quenched fluorescent reporter, thereby unleashing fluorescence. Based on this principle, Kellner et al. developed a Cas13-based method [Specific High-sensitivity Enzymatic Reporter UnLOCKING (SHERLOCK)], while Broughton et al. developed a Cas12a-based method [DNA Endonuclease-Targeted CRISPR Trans Reporter (DETECTR)]. In recent years, various CRISPR-based nucleic acid assays have emerged. Traditional CRISPR-based methods are usually time-consuming and labor-intensive because of the multiple procedures involved, encompassing nucleic acid extraction, amplification, and CRISPR detection, and the likelihood of false-positive results is increased because the reaction liquid is exposed to air. Given the above, CRISPR-based systems are in urgent need of optimization. A pneumatically-controlled microfluidic platform that can run 24 assays in parallel was designed for CRISPR-Cas12a and CRISPR-Cas13a detection applications. The system is equipped with a fluorescence detection device and can thus automatically detect femtomolar DNA and RNA samples without nucleic acid amplification. Chen et al. integrated recombinase-aided amplification with the CRISPR-Cas12a system in centrifugal microfluidics (Fig. a). This work overcomes the difficulty of integrating the two processes, which arises because Cas12a can digest the template DNA and inhibit the amplification process. In addition, Chen et al.
further pre-stored the reaction reagents in the centrifugal microfluidic device to complete the whole process automatically. In another work, Silva et al. developed an amplification-free CRISPR/Cas12a- and smartphone-based diagnostic method to detect SARS-CoV-2 (Fig. b). This assay, referred to as a cellphone-based amplification-free system with CRISPR/Cas-dependent enzyme, relies on smartphone imaging of a catalase-generated gas bubble signal in a microfluidic channel. Nucleic acids at less than 50 copies/µl can be sensitively detected without pre-amplification, and the full process from sample inlet to signal readout takes only 71 min.

Signaling module: detect signals processed by sensing module

As the final step of nucleic acid testing, signal detection directly reflects the diagnostic result and is decisive for developing efficient, sensitive, and accurate POCT. Signals can be read out through various methods, such as fluorescence-based, electrochemical, colorimetric, and magnetic-based strategies. In this section, we introduce the principle of each approach and compare them for infectious disease molecular diagnostics in microfluidics. Fluorescence-based strategies are extensively applied in POCT to diagnose infectious diseases owing to their significant benefits of superior sensitivity, low cost, easy operation, and instant analysis. These strategies use labeling fluorophores, such as fluorescent dyes and nanomaterials, to produce detectable signals (fluorescence enhancement or quenching). Accordingly, fluorescence-based strategies can be categorized into direct fluorescence labeling and “signal-on”/“signal-off” fluorescence detection. Direct fluorescence labeling detection uses special fluorescent tags that label specific ligands to generate fluorescence when selectively binding to the targets.
For “signal-on” fluorescence detection, the intensity of the fluorescence signal is positively correlated with the target quantity: the fluorescence intensity is negligible in the absence of the target and becomes detectable when the target is sufficient. Conversely, the fluorescence intensity of “signal-off” fluorescence detection is negatively correlated with the target quantity, starting at a maximum and decreasing as the target increases. For example, by utilizing the target-dependent trans-cleavage mechanism of CRISPR-Cas13a, Tian et al. developed a novel sensing strategy to detect RNA directly, bypassing reverse transcription (Fig. a). Upon binding a complementary target RNA, the CRISPR–Cas13–RNA complex is activated, triggering collateral cleavage of a non-specific RNA reporter in trans. The fluorescently-labeled reporter [fluorophore (F)] is quenched by a quencher (Q) when intact and generates fluorescence when cleaved by the activated complex. Electrochemical detection has advantages such as rapid detection, easy fabrication, low cost, portability, and ease of control, making it a powerful analytical method for POCT applications. Based on a graphene field-effect transistor, Gao et al. developed a nano-biosensor for multiplex detection of Lyme disease antigens from Borrelia burgdorferi bacteria, exhibiting a 2 pg/ml limit of detection (Fig. b). Colorimetric assays have also been applied in POCT, benefiting from their portability, low cost, ease of preparation, and naked-eye readout. Colorimetric detection converts the presence of target nucleic acid into a visible color change, utilizing oxidation of peroxidase or peroxidase-like nanomaterials, aggregation of nanomaterials, or addition of dye indicators.
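The “signal-on”/“signal-off” readout logic described above can be written as a small decision rule; the threshold fold-change below is an arbitrary illustrative choice, not a value from the cited studies:

```python
# Interpreting a fluorescence readout under the two labeling schemes:
# "signal-on" grows with target amount, "signal-off" starts at a maximum
# and is quenched as target increases. The 3x threshold is illustrative.

def target_detected(fluorescence: float, baseline: float,
                    mode: str, fold: float = 3.0) -> bool:
    if mode == "signal-on":    # e.g. a cleaved F-Q reporter lights up
        return fluorescence >= fold * baseline
    if mode == "signal-off":   # signal is quenched when target is present
        return fluorescence <= baseline / fold
    raise ValueError(f"unknown mode: {mode}")

print(target_detected(900, 100, "signal-on"))   # bright well   -> True
print(target_detected(20, 100, "signal-off"))   # quenched well -> True
print(target_detected(120, 100, "signal-on"))   # near baseline -> False
```

In practice the threshold would be calibrated against negative controls rather than fixed, but the sign of the correlation with target quantity is exactly as described above.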
Notably, gold nanoparticles are broadly applied in colorimetric strategies and have attracted increasing interest for developing colorimetric POCT platforms for on-site infectious disease diagnostics because of their ability to cause fast and significant color changes. Utilizing an integrated centrifugal microfluidic device, foodborne pathogens in a contaminated milk sample can be automatically detected down to the level of 10 bacterial cells, and the outcome can be read out by the unaided eye within 65 min (Fig. c). Magnetic-based sensing methods detect analytes sensitively by employing magnetic materials and have attracted surging interest for POCT applications in recent decades. Magnetic-based sensing has some unique advantages, such as using low-cost magnetic materials rather than expensive optical components. Moreover, the detection efficiency is improved and the sample preparation time is decreased by utilizing magnetic fields. In addition, magnetic-based sensing results exhibit great specificity, sensitivity, and a high signal-to-noise ratio because of the insignificant magnetic background signal of biological samples. A magnetic tunneling junction-based biosensor was integrated onto a portable microchip platform by Sharma et al. for the multiplex detection of pathogens (Fig. d). The biosensor sensitively detects extracted nucleic acids from pathogens below the nanomolar range. Despite the outstanding performance of the detection methods mentioned above, drawbacks still exist. These methods are compared (Table ), including some applications with detailed information on both advantages and disadvantages. The foremost step of nucleic acid testing is nucleic acid extraction, which refers to the isolation of the targeted nucleic acid from the original sample. Nucleic acid extraction is performed to purify nucleic acids from other molecular contaminants, preserve the integrity of the primary structure of the nucleic acid molecules, and maximize yields.
Nucleic acid extraction requires sample lysis and nucleic acid capture, whose quality and efficiency have a huge impact on research and diagnostic results. Even subtle adverse effects during extraction can limit downstream detection. For example, polymerase chain reaction (PCR) and loop-mediated isothermal amplification (LAMP) are inhibited by residual organic solvents from nucleic acid extraction reagents, such as ethanol and isopropanol. Liquid–liquid extraction and solid-phase extraction are among the most popular modes of nucleic acid extraction; however, liquid–liquid extraction on chips is extremely limited because the reagents used are corrosive to most microfluidic chips. Herein we emphasize solid-phase extraction methods based on microchips and compare their strengths and weaknesses.

Silicon-based strategies

Silicon is a compatible substrate material for nucleic acids because it is biocompatible, stable, and easily modified. Importantly, when modified with silica or other materials, the composite adsorbs negatively-charged nucleic acids under low-pH, hypersaline conditions and elutes them in high-pH, low-salt solutions. Based on this phenomenon, nucleic acids can be purified. Silicon-based materials of various forms have been exploited for nucleic acid extraction in microfluidics, such as silica beads, powder, microfiber filters, and silica gel membranes. Depending on the material properties, silicon-based materials can be utilized in various ways on microchips. For example, silica beads, powders, and commercial nanofilters can simply be placed into the wells or microchannels of a microfluidic chip to assist the extraction of nucleic acids from samples. Surface-modified silica gel membranes can also be used to rapidly purify DNA from pathogens at low cost. For example, Wang et al.
introduced a universal and portable system combining a denaturation bubble-mediated strand exchange amplification reaction with chitooligosaccharide-coated silica gel membranes, through which 10²–10⁸ colony-forming units (CFU)/ml of Vibrio parahaemolyticus were successfully detected and the presence of the pathogen was easily visualized. Powell et al. then used a silicon-based microchip to detect hepatitis C virus (HCV), human immunodeficiency virus (HIV), Zika virus, and human papilloma virus in a multiplexed and automated fashion, in which 1.3 µl meandering microreactors were designed to capture viral RNA and perform in situ amplification. In addition to these methods, surface-modified silicon micropillars play a key role in nucleic acid extraction because their geometrical dimensions and the properties of the modifying material significantly improve extraction efficiency. Chen et al. proposed a microfluidic platform to extract RNA at low concentrations based on amino-coated silicon micropillars. The microfluidic device integrates micropillar arrays within an area of 0.25 cm² on the silicon substrate, and its high surface-to-volume-ratio design achieves higher extraction efficiency. Benefiting from this design, the microfluidic device achieves up to 95% nucleic acid extraction efficiency. These silicon-based strategies demonstrate the value of rapid, low-cost nucleic acid isolation. When combined with microfluidic chips, silicon-based extraction strategies not only improve the efficiency of nucleic acid testing but also facilitate miniaturization and integration of analytical devices.

Magnetic-based strategies

Magnetic-based isolation exploits magnetic particles to extract nucleic acids under external magnetic fields. The commonly utilized magnetic particles include silica-coated, amino-coated, and carboxyl-coated Fe₃O₄ or γ-Fe₂O₃ magnetic particles.
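The charge-based bind/elute behavior described above for silica surfaces can be reduced to a simple condition on buffer pH and salt concentration; the cutoff values below are hypothetical placeholders for illustration, not measured parameters:

```python
# Bind/elute logic for silica surfaces, as described above: nucleic acids
# adsorb in low-pH, high-salt buffer and elute in high-pH, low-salt buffer.
# The cutoff values are illustrative, not taken from the cited works.

LOW_PH, HIGH_SALT_M = 6.0, 1.0  # hypothetical thresholds

def dna_bound_to_silica(ph: float, salt_molar: float) -> bool:
    """True if buffer conditions favor adsorption to the silica surface."""
    return ph <= LOW_PH and salt_molar >= HIGH_SALT_M

print(dna_bound_to_silica(5.0, 4.0))   # binding/wash buffer -> True
print(dna_bound_to_silica(8.5, 0.01))  # elution buffer      -> False
```

The same two-condition switch underlies both silica membranes and the silica-coated magnetic beads discussed next; only the means of moving the solid phase differs.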
Compared with silicon-based solid-phase extraction techniques, a distinct feature of magnetic particles is their ease of manipulation and control using an external magnet. Through the electrostatic interactions between nucleic acids and silica, nucleic acids are adsorbed onto the surface of silica-encapsulated magnetic particles under hypersaline, low-pH conditions, and the molecules can be eluted again under hyposaline, high-pH conditions. Silica-coated magnetic beads allow DNA extraction from large-volume samples (400 μl) with the help of magnet-guided movement. As a demonstration, Rodriguez-Mateos et al. used a tunable magnet to transfer magnetic beads between different chambers. Based on silica-coated magnetic particles, 470 copies/ml of genomic SARS-CoV-2 RNA could be extracted from wastewater samples for reverse-transcription LAMP (RT-LAMP) detection, with results read out by the unaided eye within 1 h (Fig. a). Positively-charged magnetic particles are well suited to binding the phosphate backbone of nucleic acids: at a specific salt concentration, the negatively-charged phosphate groups are adsorbed onto the surface of the magnetic composite particles by the positive charges. Accordingly, magnetic nanoparticles with a rough surface and a high density of amino groups have been developed for nucleic acid extraction. After magnetic separation and blocking, the magnetic nanoparticle–DNA complexes can be used directly for PCR, omitting complex and time-consuming purification and elution operations. Negatively-charged, carboxyl-coated magnetic nanoparticles have also been made to isolate nucleic acids, which adsorb to their surface in solutions with high concentrations of polyethylene glycol and sodium chloride. With these surface-modified magnetic beads, DNA extraction is compatible with downstream amplification. Dignan et al.
described an automated, portable centrifugal microfluidic platform for nucleic acid pre-processing that allows in situ use by non-technical personnel. The compatibility of the extracted DNA with LAMP, a technique well suited to point-of-care nucleic acid analysis, was further demonstrated, given its minimal hardware requirements and adaptability to colorimetric readout (Fig. b). Magnetic bead methods open the possibility of automated extraction, and some commercial automatic nucleic acid extractors exist [KingFisher; ThermoFisher (Waltham, MA, U.S.), QIAcube® HT; CapitalBio (Beijing, China), and Biomek®; Beckman (Miami, FL, U.S.)]. The combination of magnetic beads and microfluidics for automated, high-efficiency nucleic acid extraction has the potential to advance molecular diagnostics; however, it still depends heavily on complex control systems to precisely manipulate the beads, which explains why prevailing commercial products are bulky and expensive, restricting further application in POCT.

Porous materials-based strategies

Several porous materials, such as modified nitrocellulose filters, Flinders Technology Associates (FTA) cards, polyethersulfone-based filter paper, and glycan-coated materials, have also been utilized for nucleic acid detection. Porous fibrous materials, such as fibrous papers, were first used for DNA extraction by exploiting the physical entanglement of long-chain DNA molecules with the fibers. Small pores impose strong physical constraints on DNA molecules, which benefits DNA extraction; however, the extraction efficiency may not satisfy the needs of DNA amplification because the pore sizes of fibrous paper vary. The FTA card, a commercial filter paper used in the forensic field, has been widely applied to other molecular diagnostics.
Using cellulose filter paper impregnated with various chemicals that help lyse cellular membranes, the released DNA can be protected from degradation for up to 2 years. More recently, impregnated cellulose papers have been developed for molecular testing of various pathogens, including SARS-CoV-2, leishmaniasis, and malaria. In one such device, HIV in separated plasma is directly lysed and the viral nucleic acids are enriched by an integrated, flow-through FTA® membrane in the concentrator, enabling high-efficiency nucleic acid preparation (Fig. c). The main challenge for nucleic acid testing with FTA cards is that the impregnated chemicals, such as guanidine and isopropanol, inhibit subsequent amplification reactions. To solve this problem, chitosan-modified Fusion 5 filter paper was developed for high-efficiency nucleic acid extraction, combining the physical entanglement of DNA molecules with the filter fibers and the electrostatic adsorption of DNA onto the chitosan-modified fibers (Fig. d). Similarly, Zhu et al. demonstrated a chitosan-modified capillary-assisted, microfluidic-based in situ PCR method to rapidly extract and detect Zika virus RNA. Owing to the pH-responsive “on and off” charge switching of chitosan, nucleic acids can be adsorbed in the lysate environment and desorbed in the PCR mixture. As described, these strategies incorporate the strengths of different solid-phase materials and increase the performance of nucleic acid extraction in microfluidics. In practical applications, extensive use of these specialty materials is not economical, but applying them as treatments or surface modifications of common materials can preserve their functions. It is therefore believed that costs can be decreased by implementing these strategies after pilot studies.

Nucleic acid testing on microfluidic platforms often uses small sample volumes (< 100 µl) and therefore requires amplification of the target nucleic acids, with specific probes converting them into signals that are convenient for downstream detection (optical, electrical, or magnetic). Nucleic acid amplification in microfluidics can also speed up the reaction, improve the limit of detection, lower the sample demand, and increase the detection accuracy. Recently, various nucleic acid amplification methods achieving fast and accurate detection, including PCR and several isothermal amplification reactions, have been applied in microfluidics. This section summarizes these promising techniques for nucleic acid testing in microfluidic systems.

PCR

PCR mimics the DNA replication procedure of living organisms; its theory is detailed elsewhere and thus will not be discussed herein. PCR can amplify very small amounts of target DNA/RNA at an exponential rate, making it a powerful tool for detecting nucleic acids rapidly. In recent decades, many portable microfluidic devices equipped with thermal cycling systems for PCR have been developed to satisfy the needs of point-of-care diagnosis.
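The exponential character of PCR can be made concrete with one line of arithmetic: each cycle multiplies the template by at most 2, so even a single starting copy exceeds 10⁹ copies after 30 ideal cycles. A minimal sketch (the per-cycle efficiency factor is a common textbook approximation, not from the cited works):

```python
# Exponential PCR yield: n cycles give up to 2**n copies per starting
# molecule. Sub-ideal efficiency e (0 <= e <= 1) gives a per-cycle
# multiplier of (1 + e) instead of 2.

def copies_after(n_cycles: int, start_copies: float = 1.0,
                 efficiency: float = 1.0) -> float:
    return start_copies * (1.0 + efficiency) ** n_cycles

print(f"{copies_after(30):.2e}")                   # ideal 30-cycle run
print(f"{copies_after(30, efficiency=0.9):.2e}")   # 90%-efficient run
```

This doubling is why PCR detects very dilute targets, and equally why the thermal-cycling hardware that drives each doubling step dominates device size and run time in the designs discussed below.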
According to different temperature control methods, on-chip PCR can be divided into four types (traditional, continuous-flow, spatially-switched, and convective PCR) . For example, Ji et al. established the direct reverse-transcription quantitative PCR (RT-qPCR) assay on a self-designed microfluidic platform to multiply detect SARS-CoV-2, and influenza A and B viruses in pharyngeal swab samples (Fig. a). Park et al. established a simple pathogen analytic chip by integrating the film-based PCR, electrode, and polydimethylsiloxane-based finger-actuated microfluidic modules. Nevertheless, both works exemplify the common disadvantage of traditional PCR. Thermal cycling is necessary for PCR, which restricts the further miniaturization for the device and shorter testing time. The development of microfluidics-based continuous flow and spatially-switched PCR is essential to solve this problem. Utilizing a long serpentine channel or short straight channel, continuous flow PCR can achieve rapid amplification by actively pushing reagents with a pump outside of chips to three pre-heated zones in sequence and circularly. The operation successfully avoids the transition stage between different reaction temperatures, which significantly reduces the testing time (Fig. b). In another study, Jung et al. proposed a novel Rotary PCR Genetic Analyzer to perform the ultrafast and multiple reverse-transcription PCR in combination with the features of the stationary and flow-through PCR (Fig. c). The PCR microchip will rotate through three thermal blocks with different temperatures for nucleic acid amplification, as follows: I. block at 94 °C for denaturation; II. block at 58 °C for annealing; and III. block at 72 °C for the extension. Through capillary tubes and loops, or even thin disks, convective PCR can rapidly amplify nucleic acids with naturally induced free thermal convection without an external pump. 
For instance, a cycle olefin polymer microfluidic platform was developed on a fabricated rotating heater stage utilizing a centrifugation-assisted thermal cycle in a ring-structured microchannel for PCR (Fig. d). The reaction solution is driven by thermal convection and continuously exchanged high/low temperatures in the ring-structured microchannel. The whole amplification process can be finished in 10 min and the limit of detection goes to 70.5 pg/channel. As expected, rapid PCR is a powerful tool for both fully-integrated “sample-to-answer” molecular diagnostic systems and multiplex analysis systems. With rapid PCR, the time spent on detecting SARS-CoV-2 is significantly decreased, which helps to control the COVID-19 pandemic efficiently. Isothermal amplification A complex thermocycler is required for PCR, which is inappropriate for POCT. Recently, isothermal amplification methods have been applied to microfluidics, including but not limited to LAMP, recombinase polymerase amplification (RPA), and nucleic acid sequence-based amplification . With these technologies, nucleic acids are amplified at a constant temperature, thus promoting portable POCT devices for molecular diagnostics with low cost and high sensitivity. High-throughput microfluidics-based LAMP analysis enables multiplex detection of infectious diseases . In combination with centrifugal microfluidic systems, LAMP can further promote the automation of nucleic acid detection . A rotate and react SlipChip was developed to visually detect multiple bacteria in parallel by LAMP (Fig. a). With optimized LAMP in the assay, the fluorescent signal-to-noise ratio is approximately fivefold, and the limit of detection reached 7.2 copies/μl genomic DNA. Moreover, the existence of five common digestive bacterial pathogens, including Bacillus cereus , Escherichia coli , Salmonella enterica , Vibrio fluvialis and Vibrio parahaemolyticus , were visualized based on the method in < 60 min. 
The advantages of LAMP in microfluidics include, but are not limited to, rapid reaction and miniaturized detection. Yet, because of the LAMP reaction temperature (approximately 70 °C), aerosols are inevitably produced, leading to a high rate of false-positive results. Detection specificity, primer design, and temperature control also need optimization for LAMP. Moreover, chip designs that integrate multiple-target detection on one chip are of great significance and, although well suited to LAMP, still leave considerable room for improvement. RPA can partially overcome the high false-positive rate of LAMP because its relatively low reaction temperature (approximately 37 °C) causes far less evaporation. In the RPA system, two opposing primers initiate DNA synthesis by combining with recombinase enzymes, and amplification can be completed within 10 min. The entire RPA process is therefore much faster than PCR or LAMP. Microfluidic technology has been shown in recent years to further improve the speed and accuracy of RPA. For example, Liu et al. developed a microfluidic-integrated lateral flow recombinase polymerase amplification assay for rapid and sensitive detection of SARS-CoV-2, integrating reverse-transcription RPA (RT-RPA) and a universal lateral flow dipstick detection system into a single microfluidic system (Fig. b). The assay can be finished in approximately 30 min, with a limit of detection of 1 copy/μl (30 copies/sample). A wearable microfluidic device was developed by Kong et al. for rapid and straightforward detection of HIV-1 DNA through RPA, utilizing body temperature and a cellphone-based fluorescence detection system (Fig. c).
The wearable RPA test can detect target sequences at 100 copies/ml within 24 min, showing great potential for rapid diagnosis of HIV-1-infected infants in resource-limited areas. RPA on microfluidics has advanced rapidly; however, the costs of chip fabrication and reaction reagents are high and should be lowered to increase the accessibility of the technique. In addition, the high sensitivity of RPA makes it prone to amplifying non-specific products, especially when contamination exists. These limitations may affect the application of RPA in microfluidic systems and deserve further optimization. Well-designed primers and probes for different targets are also required to increase the feasibility of RPA-based microfluidic strategies in POCT.

Clustered regularly interspaced short palindromic repeats (CRISPR)-based methods for nucleic acid testing

Cas13 and Cas12a can cut nucleic acids indiscriminately and can therefore be developed into detection and diagnostic tools. Cas13 and Cas12a are activated upon binding their target RNA or DNA, respectively. Once activated, guided by an RNA that targets pathogen-specific sequences, the proteins begin to cut other nucleic acids nearby, cleaving a quenched fluorescent reporter and unleashing fluorescence. Based on this principle, Kellner et al. developed a Cas13-based method [Specific High-sensitivity Enzymatic Reporter unLOCKing (SHERLOCK)], while Broughton et al. developed a Cas12a-based method [DNA Endonuclease-Targeted CRISPR Trans Reporter (DETECTR)]. In recent years, various CRISPR-based nucleic acid assays have emerged. Traditional CRISPR-based methods are usually time-consuming and labor-intensive because they involve multiple procedures: nucleic acid extraction, amplification, and CRISPR detection. Exposing liquids to air during these steps may also increase the likelihood of false-positive results.
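The collateral-cleavage readout described above ultimately reduces to deciding whether a sample's fluorescence rises significantly above that of no-target controls. A minimal sketch, assuming a common mean-plus-three-standard-deviations threshold rule; the RFU values are invented for demonstration:

```python
# Illustrative positive/negative call for a CRISPR collateral-cleavage assay.
# Convention assumed here: a sample is flagged positive when its endpoint
# fluorescence exceeds the mean of the no-target controls by 3 standard
# deviations. All readings are made up.
from statistics import mean, stdev

def call_sample(sample_rfu: float, blank_rfu: list) -> bool:
    """True if fluorescence is significantly above the no-target background."""
    threshold = mean(blank_rfu) + 3 * stdev(blank_rfu)
    return sample_rfu > threshold

blanks = [102.0, 98.0, 101.0, 99.0]   # no-target controls (hypothetical RFU)
print(call_sample(450.0, blanks))     # activated Cas cleaved the reporter
print(call_sample(103.0, blanks))     # near background, called negative
```

In practice, assays also monitor the kinetics of the fluorescence rise rather than a single endpoint, but the thresholding idea is the same.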
Given the above, CRISPR-based systems are in urgent need of optimization. A pneumatically-controlled microfluidic platform that runs 24 assays in parallel was designed for CRISPR-Cas12a and CRISPR-Cas13a detection applications. Equipped with a fluorescence detection device, the system can automatically detect femtomolar DNA and RNA samples without nucleic acid amplification. Chen et al. integrated recombinase-aided amplification with the CRISPR-Cas12a system in centrifugal microfluidics (Fig. a). This work overcomes the difficulty of integrating these two processes, namely that Cas12a can digest the template DNA and inhibit the amplification. Chen et al. also pre-stored the reaction reagents in the centrifugal microfluidic device so that the whole process runs automatically. In another work, Silva et al. developed an amplification-free CRISPR/Cas12a- and smartphone-based diagnostic method to detect SARS-CoV-2 (Fig. b). The assay, referred to as a cellphone-based amplification-free system with CRISPR/Cas-dependent enzyme, relies on smartphone imaging of a catalase-generated gas-bubble signal in a microfluidic channel. Nucleic acids at less than 50 copies/µl can be sensitively detected without pre-amplification, and the full process from sample inlet to signal readout takes only 71 min.
As the final step of nucleic acid testing, signal detection directly reflects the diagnostic result and is decisive for developing efficient, sensitive, and accurate POCT. Signals can be read out through various methods, such as fluorescence-based, electrochemical, colorimetric, and magnetic-based strategies. In this section, we introduce the principle of each approach and compare them for infectious disease molecular diagnostics in microfluidics. Fluorescence-based strategies are extensively applied in POCT for diagnosing infectious diseases owing to their superior sensitivity, low cost, easy operation, and instant analysis. These strategies use labeled fluorophores, such as fluorescent dyes and nanomaterials, to produce detectable signals (fluorescence enhancement or quenching).
Based on how the signal is generated, fluorescence-based strategies can be categorized into direct fluorescence labeling, "signal-on" fluorescence detection, and "signal-off" fluorescence detection. Direct fluorescence labeling uses special fluorescent tags on specific ligands to generate a defined amount of fluorescence upon selective binding to the target. In "signal-on" detection, the fluorescence signal is positively correlated with the target quantity: the intensity is negligible in the absence of the target and becomes detectable as the target accumulates. Conversely, in "signal-off" detection the fluorescence intensity is negatively correlated with the target quantity, starting at a maximum and decreasing as the target increases. For example, utilizing the target-dependent trans-cleavage mechanism of CRISPR-Cas13a, Tian et al. developed a novel sensing strategy that detects RNA directly, bypassing reverse transcription (Fig. a). Upon binding a complementary target RNA, the CRISPR-Cas13-RNA complex is activated, triggering collateral cleavage of a non-specific RNA reporter in trans. The reporter, labeled with a fluorophore (F), is quenched by a quencher (Q) when intact and fluoresces when cleaved by the activated complex. Electrochemical detection offers rapid readout, easy fabrication, low cost, portability, and ease of control, making it a powerful analytical method for POCT applications. Based on a graphene field-effect transistor, Gao et al. developed a nano-biosensor for multiplex detection of Lyme disease antigens from Borrelia burgdorferi bacteria, with a limit of detection of 2 pg/ml (Fig. b). Colorimetric assays have also been applied to POCT, benefitting from their portability, low cost, ease of preparation, and naked-eye readout.
Colorimetric detection converts the presence of target nucleic acids into a visible color change, utilizing the oxidation of peroxidase or peroxidase-like nanomaterials, the aggregation of nanomaterials, or the addition of dye indicators. Notably, gold nanoparticles are widely used to establish colorimetric strategies and have attracted increasing interest for on-site colorimetric POCT platforms for infectious disease diagnostics because of their ability to produce fast and pronounced color changes. Using an integrated centrifugal microfluidic device, foodborne pathogens in a contaminated milk sample can be detected automatically down to the level of 10 bacterial cells, with results readable by the unaided eye within 65 min (Fig. c). Magnetic-based sensing methods detect analytes sensitively by employing magnetic materials and have attracted surging interest for POCT applications in recent decades. They have some unique advantages, such as relying on low-cost magnetic materials rather than expensive optical components. Moreover, applying magnetic fields improves detection efficiency and shortens sample preparation time. Magnetic-based sensing also offers great specificity, sensitivity, and a high signal-to-noise ratio because the magnetic background signal of biological samples is negligible. A magnetic tunneling junction-based biosensor was integrated onto a portable microchip platform by Sharma et al. for multiplex detection of pathogens (Fig. d); the biosensor sensitively detects extracted pathogen nucleic acids at sub-nanomolar levels. Despite the outstanding performance of the detection methods described above, drawbacks remain; these methods are compared (Table ), together with representative applications and their respective advantages and disadvantages.
With the development of microfluidics, micro-electro-mechanical systems, nanotechnology, and materials science, the application of microfluidic chips for infectious disease detection has advanced continuously. Miniaturized devices and precise fluid manipulation improve the accuracy and economy of diagnosis. Great efforts have therefore been made to optimize and innovate chips, leading to microfluidic chips of various structures and functions. Herein, we briefly introduce a few common types of microfluidic platforms and compare their features (advantages and disadvantages). Most of the examples listed below focus on SARS-CoV-2 detection.

Lab on a cartridge chip (LOCC)

LOCC is the most common micro total analysis system, in which manipulation is highly miniaturized, integrated, automated, and parallelized, from sample input and preparation to flow control and liquid detection. Fluids are manipulated by well-designed geometries and the interplay of multiple physical effects, such as pressure gradients, capillarity, electro-kinetics, magnetic fields, and sound waves. LOCC shows excellent advantages in high-throughput screening and multiple assays, with fast analysis, small sample volumes, low power consumption, and efficient control and manipulation; however, LOCC devices are delicate, difficult to fabricate, package, and interface, and challenging to multiplex and reuse. Compared with other platforms, LOCC offers the greatest application diversity and the best technological compatibility, but its drawbacks are also obvious: high complexity and weak reproducibility. Its dependence on external pumps, which are usually bulky and expensive, further constrains its use in POCT. During the COVID-19 outbreak, LOCC has received a great deal of attention.
Meanwhile, novel chips integrating various techniques have emerged. For example, smartphones are now widely available as portable analytical devices and have great potential for integration with LOCC. Sun et al. fabricated a microfluidic chip that amplifies specific nucleic acid sequences of five pathogens, including SARS-CoV-2, in parallel by LAMP and detects them at the end of the reactions with a smartphone within 1 h. As another example, Sundah et al. created a molecular switch [catalytic amplification by the transition-state molecular switch (CATCH)] that directly and sensitively detects SARS-CoV-2 RNA targets with a smartphone. CATCH is compatible with portable LOCC and achieves superior performance (approximately 8 RNA copies/μl; < 1 h at room temperature). In addition, various driving forces are exploited in LOCC devices for molecular diagnostics, such as vacuum, stretching, and electric fields. Kang et al. demonstrated ultrafast, real-time nano-plasmonic on-chip PCR for rapid, quantitative, on-site diagnosis of COVID-19 using a vacuum-driven plasmofluidic PCR chip. Li et al. subsequently developed a stretching-driven microfluidic chip for diagnosing COVID-19; the platform adopts an RT-LAMP amplification system to decide qualitatively whether a sample tests positive or negative. Ramachandran et al. then achieved suitable electric field gradients using isotachophoresis (ITP), a selective ionic focusing technique, implemented in microfluidics. Through ITP, target RNA in raw nasopharyngeal swab samples can be purified automatically. Ramachandran et al. then combined this ITP purification with LAMP and an ITP-enhanced CRISPR assay, detecting SARS-CoV-2 in approximately 35 min from both contrived and clinical nasopharyngeal swab samples. Additionally, new ideas are being launched all the time. Jadhav et al.
proposed a diagnostic protocol based on surface-enhanced Raman spectroscopy coupled with microfluidic devices containing integrated microchannels functionalized with vertically-aligned gold/silver-coated carbon nanotubes, or with disposable electrospun micro/nano-filter membranes. The device adsorbs viruses from various biological fluids/secretions, such as saliva, nasopharyngeal secretions, and tears; the viral titer is thus enriched, and the viruses can be accurately identified from their Raman signatures.

Lab on a disc (LOAD)

LOAD is a centrifugal microfluidic platform in which all processes are controlled by the frequency protocol of a rotating micro-structured substrate. The LOAD device is characterized by using centrifugal force as the principal driving force; fluids are additionally controlled by capillary, Euler, and Coriolis forces. With a centrifugal unit, assays are conducted by sequential liquid operations from radially inward to outward positions, dispensing with external tubes, pumps, actuators, and active valves. In short, this single control method simplifies manipulation. Because liquids at the same distance from the center of the disc and in identical microfluidic channels experience equal forces, channel structures can be duplicated. LOAD devices are thus easier and more economical to design and fabricate than conventional LOCC devices, while reactions remain highly independent and parallelized. However, because of the high mechanical strength required of the centrifugal equipment, the choice of chip materials is limited, and small volumes are difficult to handle. Moreover, most LOAD devices are designed for single use, which is costly for large-scale assays. LOAD is regarded as one of the most promising microfluidic devices and has received great attention from researchers and manufacturers in recent decades.
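The centrifugal pumping that LOAD relies on can be put into numbers: a liquid plug lying between radial positions r1 and r2 on a disc spinning at angular velocity ω experiences a pressure head Δp = ρω²(r2² − r1²)/2, which is what moves liquids radially outward without any external pump. A sketch with hypothetical disc geometry and spin speed:

```python
# Centrifugal pressure head on a spinning-disc (LOAD) platform:
# dp = rho * w**2 * (r2**2 - r1**2) / 2
# Disc geometry and spin speed below are hypothetical values.
import math

rho = 1000.0                   # density of water, kg/m^3
rpm = 3000.0                   # spin speed (assumed)
w = 2 * math.pi * rpm / 60     # angular velocity, rad/s
r1, r2 = 0.01, 0.03            # liquid plug spanning 10 mm to 30 mm radius

dp = rho * w**2 * (r2**2 - r1**2) / 2
print(f"centrifugal pressure head: {dp:.0f} Pa (~{dp / 1000:.1f} kPa)")
```

Spin speed thus acts as a tunable "pump setting": doubling the rotation rate quadruples the available pressure, which is how frequency protocols sequence valving and aliquoting steps on the disc.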
As a result, LOAD has been widely used in the molecular diagnostics of infectious pathogens, especially during the COVID-19 outbreak. For example, at the end of 2020, Ji et al. demonstrated direct RT-qPCR detection of SARS-CoV-2 and influenza A and B infections in parallel from pharyngeal swab samples, rapidly and automatically. Xiong et al. then presented a disc-like microfluidic platform integrated with LAMP for rapid, accurate, and simultaneous detection of seven human respiratory coronaviruses, including SARS-CoV-2, within 40 min. In early 2021, de Oliveira et al. presented a polystyrene-toner centrifugal microfluidic chip, manually controlled by a fidget spinner, for molecular diagnostics of COVID-19 by RT-LAMP. Subsequently, Dignan et al. described an automated, portable centrifugal microdevice for purifying SARS-CoV-2 RNA directly from buccal swab cuttings. Xiong et al. presented a small-volume rotating microfluidic fluorescence chip integrated with an aerosol SARS-CoV-2 sampling system, with a detection limit of 10 copies/μl and a shortest cycle-threshold time of 15 min. Soares et al. recently reported an integrated modular centrifugal microfluidic platform for detecting SARS-CoV-2 RNA by LAMP directly from heat-inactivated nasopharyngeal swab samples. These examples demonstrate the great advantages of LOAD for the molecular diagnostics of COVID-19 and its good prospects for growth.

Microfluidic paper-based analytical devices (μPADs)

In 1945, Müller and Clegg first introduced microfluidic channels on paper using filter paper and paraffin. In 2007, the Whitesides group created the first functional paper-based platform for testing protein and glucose. Paper has since become an ideal substrate for microfluidics, with intrinsic properties such as a hydrophilic and porous structure, excellent biocompatibility, light weight, flexibility, foldability, low cost, ease of use, and wide availability.
Classic μPADs comprise hydrophilic/hydrophobic structures built on paper substrates. By their three-dimensional structure, μPADs can be classified as two-dimensional (2D) or three-dimensional (3D). 2D μPADs are produced by patterning hydrophobic barriers to form microfluidic channels, while 3D μPADs are usually made by stacking layers of 2D microfluidic paper, and sometimes by paper folding, slip techniques, open channels, or 3D printing. Aqueous solutions or biological fluids on μPADs are driven mainly by capillary forces, without external power sources, which facilitates reagent pre-storage, sample manipulation, and multiplex detection. Nevertheless, precise flow control and multistep assays are hindered, and detection speed, sensitivity, and reusability remain limited. As an extraordinary microfluidic platform, μPADs have been greatly promoted for the molecular diagnostics of infectious diseases, such as HCV, HIV, and SARS-CoV-2. To detect HCV selectively and sensitively, Teengam et al. developed a novel fluorescent paper-based biosensor employing a highly specific pyrrolidinyl peptide nucleic acid probe. The probe was covalently immobilized onto partially oxidized cellulose paper through reductive alkylation between amine and aldehyde groups, and detection was based on fluorescence, with signals read out by a custom-made portable fluorescence camera gadget combined with a cellphone camera. Subsequently, Lu et al. constructed a flexible paper-based electrode based on a nickel metal-organic framework composite/gold nanoparticles/carbon nanotubes/polyvinyl alcohol for detecting target HIV DNA by DNA hybridization, using methylene blue as a redox indicator. Recently, Chowdury et al. proposed a hypothetical design for a μPAD point-of-care platform for COVID-19 analyte detection using unprocessed patient-derived saliva, combined with LAMP and a handheld image acquisition technique.
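The capillary-driven transport that μPADs depend on is often approximated by the Lucas-Washburn relation, in which the wicked distance grows with the square root of time: L(t) = sqrt(γ·r·cosθ·t / (2μ)). A sketch for water in an idealized paper pore; the pore radius and contact angle are assumptions, and real papers wick more slowly because of tortuosity and evaporation:

```python
# Lucas-Washburn estimate of capillary wicking in a paper channel:
# L(t) = sqrt(gamma * r * cos(theta) * t / (2 * mu))
# Parameter values are rough assumptions for water in filter paper.
import math

gamma = 0.072    # surface tension of water, N/m
mu = 1.0e-3      # viscosity of water, Pa*s
r = 5.0e-6       # effective pore radius, m (assumed)
theta = 0.0      # contact angle, rad (fully wetting, assumed)

def wicking_distance(t_seconds: float) -> float:
    """Distance (m) travelled by the fluid front after t seconds."""
    return math.sqrt(gamma * r * math.cos(theta) * t_seconds / (2 * mu))

print(f"after 60 s: {wicking_distance(60) * 100:.1f} cm")
```

The square-root dependence explains a practical μPAD constraint noted above: flow slows as the front advances, so long channels and late assay steps are increasingly hard to time precisely.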
Lateral flow assay (LFA) chips

Lateral flow tests drive liquids by capillary forces and control fluid movement through the wettability and characteristic structure of a porous or micro-structured substrate. A lateral flow device consists of sample, conjugate, incubation/detection, and absorbent pads. Nucleic acid molecules in an LFA recognize specific conjugates pre-stored on the conjugate pad and combine with them into complexes. As the fluid passes through the incubation and detection pad, the complexes are captured by capture molecules located on the test and control lines, producing results that can be read directly by the unaided eye. Typically, an LFA is completed in 2–15 min, which is faster than traditional assays. Owing to this special mechanism, LFA requires few operations and no extra equipment, making it user-friendly. It is amenable to fabrication and miniaturization, and the paper-based substrate keeps costs low. However, it is suited mainly to qualitative analysis, quantitative detection is very difficult, and its multiplexing capability and throughput are so limited that usually only one sufficiently abundant nucleic acid target can be tested at a time. Although most LFA applications focus on immunoassays, applying LFA to molecular diagnostics in microfluidic chips is also efficient and popular. Taking hepatitis B virus, HIV, and SARS-CoV-2 LFAs as examples: Gong et al. presented an upconversion nanoparticle-based LFA platform and demonstrated the universality of this miniaturized, portable platform by sensitively and quantitatively detecting several targets, such as hepatitis B virus nucleic acids. Furthermore, Fu et al. showed a novel surface-enhanced Raman spectroscopy-based LFA for the quantitative analysis of low-concentration HIV-1 DNA. To rapidly and sensitively detect SARS-CoV-2, Liu et al.
developed the microfluidic-integrated lateral flow RPA assay, combining RT-RPA and a universal lateral flow dipstick detection system in a single microfluidic system. The applications of the different microfluidic platforms vary across studies, each exploiting the capabilities and merits of its platform. LOCC is the most inclusive platform in terms of application diversity and technological compatibility, and offers the greatest development potential owing to its available valves, pumps, and channels; we therefore suggest that novel approaches be attempted and optimized on LOCC first, and we expect more efficient and accurate methods to be discovered and applied to such systems. LOAD inherits precise liquid control from LOCC devices and shows unique advantages in being driven solely by centrifugal force, without external actuators, while parallel reactions can be both individual and synchronized. LOAD is therefore likely to become the mainstream microfluidic platform, with fewer manual operations and increasingly mature, automated techniques. μPAD platforms combine the advantages of LOCC and paper materials and are suitable for inexpensive, single-use diagnostics, so their future development should focus on convenient, well-established technologies. Furthermore, LFA is highly suitable for naked-eye detection and is expected to reduce sample consumption and accelerate testing. The detailed comparison of the platforms is shown in Table .
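For a sandwich-format LFA strip like those described above, the naked-eye readout follows a simple decision table: the control line must appear for the run to be valid, and the test line then indicates the target. A minimal sketch of that logic (competitive-format strips invert the test-line rule):

```python
# Decision logic for reading a sandwich-format lateral flow strip.
def read_strip(test_line: bool, control_line: bool) -> str:
    """Interpret the visible lines on an LFA strip."""
    if not control_line:
        return "invalid"    # flow or reagent failure; the test must be repeated
    return "positive" if test_line else "negative"

print(read_strip(True, True))    # both lines visible -> positive
print(read_strip(False, True))   # control only -> negative
print(read_strip(True, False))   # no control line -> invalid, regardless of test line
```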
LOCC offers excellent advantages for high-throughput screening and multiplexed assays, with fast analysis, small sample volumes, low power consumption, and efficient control and manipulation; however, LOCC devices are delicate and difficult to fabricate, package, and interface, and multiplexing and reuse remain challenging . Compared with other platforms, LOCC has exclusive merits in maximal application diversity and the best compatibility with other technologies, but its drawbacks of high complexity and weak reproducibility are also obvious. Its dependence on external pumps, which are usually bulky and expensive, further constrains its use for POCT. During the COVID-19 outbreak, much attention was paid to LOCC, and novel chips integrating various techniques emerged. For example, smartphones are now widely available as portable analytic devices and have great potential for integration with LOCC. Sun et al. fabricated a microfluidic chip that amplifies specific nucleic acid sequences of five pathogens, including SARS-CoV-2, in parallel by LAMP and detects them at the end of the reactions with a smartphone within 1 h. As another example, Sundah et al. developed a molecular switch [catalytic amplification by the transition-state molecular switch (CATCH)] that can directly and sensitively detect SARS-CoV-2 RNA targets with a smartphone. CATCH is compatible with portable LOCC and achieves superior performance (approximately 8 RNA copies/μl; < 1 h at room temperature) . In addition, other driving forces, such as vacuum, stretching, and electric fields, are used in LOCC equipment for molecular diagnostics. Kang et al. demonstrated an ultrafast, real-time nano-plasmonic on-chip PCR to rapidly and quantitatively diagnose COVID-19 on site using a vacuum-driven plasmofluidic PCR chip. Li et al. subsequently developed a stretching-driven microfluidic chip for the diagnosis of COVID-19.
The platform adopted an RT-LAMP amplification system to decide qualitatively whether a sample tested positive or negative. Subsequently, Ramachandran et al. established a suitable electric field gradient using isotachophoresis (ITP), a selective ionic focusing technique implemented in microfluidics. Through ITP, target RNA within raw nasopharyngeal swab samples can be purified automatically. Ramachandran et al. then combined this ITP purification with LAMP and an ITP-enhanced CRISPR assay to detect SARS-CoV-2 in approximately 35 min from both contrived and clinical nasopharyngeal swab samples. Additionally, new ideas are being launched all the time. Jadhav et al. proposed a diagnostic protocol based on surface-enhanced Raman spectroscopy coupled with microfluidic devices containing integrated microchannels functionalized with vertically aligned gold/silver-coated carbon nanotubes or with disposable electrospun micro/nano-filter membranes. The device adsorbs viruses from various biological fluids and secretions, such as saliva, nasopharyngeal fluid, and tears; the viral titer is thereby enriched, and the viruses can be accurately identified from their Raman signatures. LOAD is a centrifugal microfluidic platform in which all processes are controlled by the frequency protocol of a rotating micro-structured substrate . The LOAD device is characterized by using centrifugal force as the main driving force, with fluids additionally controlled by capillary, Euler, and Coriolis forces. With a centrifugal unit, assays are conducted by sequential liquid operations from radially inward to outward positions, dispensing with external tubes, pumps, actuators, and active valves; this single control mechanism simplifies manipulation. The forces on liquids at the same distance from the center of the disc and in identical microfluidic channels are equal, making repeated channel structures possible.
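The centrifugal pumping just described can be quantified with the standard expression for the pressure generated across a spinning liquid plug, ΔP = ρω²(r₂² − r₁²)/2. The sketch below evaluates it for an illustrative water plug; the disc radii and spin speeds are assumed values, not parameters of any cited device.

```python
import math

# Pressure generated across a liquid plug on a lab-on-a-disc platform:
# dP = rho * omega^2 * (r2^2 - r1^2) / 2, with omega from the spin rate.

def centrifugal_pressure(rho, rpm, r1, r2):
    omega = 2 * math.pi * rpm / 60.0      # angular velocity [rad/s]
    return rho * omega**2 * (r2**2 - r1**2) / 2.0   # [Pa]

rho = 1000.0                              # water [kg/m^3]
for rpm in (1000, 3000, 6000):
    dP = centrifugal_pressure(rho, rpm, r1=0.02, r2=0.03)
    print(f"{rpm:>4} rpm -> {dP / 1000:.1f} kPa")
# roughly 2.7, 24.7, and 98.7 kPa for a plug between 2 and 3 cm radius
```

Because the pressure depends only on fluid density and radial position, identical channels placed at the same radius experience identical forcing, which is what makes the individual yet synchronized parallel reactions described above possible.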
Thus, LOAD devices are easier and more economical to design and fabricate than conventional LOCC devices, and reactions are highly independent and parallelized; however, because of the high mechanical strength required of centrifugal equipment, the available chip materials are limited and small volumes are difficult to handle. At the same time, most LOAD devices are single-use, which is costly for large-scale assays . LOAD is regarded as one of the most promising microfluidic devices and has received great attention from researchers and manufacturers in recent decades. As a result, LOAD has been widely adopted for the molecular diagnostics of infectious pathogens , especially during the COVID-19 outbreak. For example, at the end of 2020, Ji et al. demonstrated a direct RT-qPCR assay to detect SARS-CoV-2 and influenza A and B infections in parallel from pharyngeal swab samples, rapidly and automatically. Xiong et al. then presented a disc-like microfluidic platform integrated with LAMP for rapid, accurate, and simultaneous detection of seven human respiratory coronaviruses, including SARS-CoV-2, within 40 min. In early 2021, de Oliveira et al. presented a polystyrene-toner centrifugal microfluidic chip manually controlled by a fidget spinner for molecular diagnostics of COVID-19 by RT-LAMP. Subsequently, Dignan et al. described an automated, portable centrifugal microdevice to purify SARS-CoV-2 RNA directly from buccal swab cuttings. Xiong et al. presented a small-volume rotating microfluidic fluorescence chip integrated with an aerosol SARS-CoV-2 sampling system, with a detection limit of 10 copies/μl and a shortest cycle threshold of 15 min. Soares et al. recently reported an integrated modular centrifugal microfluidic platform to detect SARS-CoV-2 RNA by LAMP directly from heat-inactivated nasopharyngeal swab samples.
These examples demonstrate the great advantages of applying LOAD to the molecular diagnostics of COVID-19 and its good prospects for growth. In 1945, Müller and Clegg first introduced microfluidic channels on paper using filter paper and paraffin. In 2007, the Whitesides group created the first functional paper platform to test protein and glucose. Paper has since become an ideal substrate for microfluidics thanks to its intrinsic properties: a hydrophilic, porous structure, excellent biocompatibility, light weight, flexibility, foldability, low cost, ease of use, and wide availability. Classic μPADs are composed of hydrophilic/hydrophobic structures built on paper substrates. Based on their structure, μPADs can be classified as two-dimensional (2D) or three-dimensional (3D). 2D μPADs are produced by patterning hydrophobic borders to form microfluidic channels, while 3D μPADs are usually made by stacking 2D microfluidic paper layers, and sometimes by paper folding, slip techniques, open channels, or 3D printing . Aqueous solutions and biological fluids on μPADs are controlled mainly by capillary forces without external power sources, which facilitates reagent pre-storage, sample manipulation, and multiplex detection. Nevertheless, precise flow control and multiplexed assays are hindered, and detection speed, sensitivity, and reusability remain limited . As an extraordinary microfluidic platform, μPADs have been strongly promoted and developed for molecular diagnostics of infectious diseases such as HCV, HIV, and SARS-CoV-2 . To detect HCV selectively and sensitively, Teengam et al. developed a novel fluorescent paper-based biosensor employing a highly specific pyrrolidinyl peptide nucleic acid probe. The nucleic acid was covalently immobilized onto partially oxidized cellulose paper through reductive alkylation between amine and aldehyde groups, with detection based on fluorescence.
The signals can be read out by a custom-made portable fluorescent camera gadget combined with a cellphone camera. Subsequently, Lu et al. constructed a flexible paper-based electrode based on a nickel metal–organic framework composite/gold nanoparticles/carbon nanotubes/polyvinyl alcohol for target HIV DNA detection by DNA hybridization using methylene blue as a redox indicator. Recently, Chowdury et al. proposed a hypothetical design of a μPAD point-of-care platform for COVID-19 analyte detection using unprocessed patient-derived saliva, combined with LAMP and a handheld image acquisition technique.

A digital assay partitions a sample into many microreactors, each containing a discrete number of target molecules . Digital assays offer significant advantages for absolute quantification by simultaneously and individually conducting thousands of parallel biochemical experiments in micrometer-sized compartments rather than in a continuous phase. Reactions in compartments can reduce sample volumes, improve reaction efficiency, and integrate easily with other analytic techniques, without the networks of channels, pumps, and valves of traditional microfluidics, allowing a compact design . The following two approaches are used in digital assays to accomplish uniform and precise compartmentalization of solutions, including reagents and samples such as cells, nucleic acids, and other particles or molecules: (1) droplet emulsions exploiting the interfacial instability of liquids; and (2) array separation through the geometric constraints of the device. In the former method, droplets containing reagents and samples in microchannels can be generated by passive methods, such as co-flow, cross-flow, flow-focusing, step emulsification, microchannel emulsification, and membrane emulsification, through viscous shear forces and variations in channel confinement , or by active methods with the aid of additional energy input through electrical, magnetic, thermal, and mechanical controls . In the latter method, better uniformity of liquid volume in microfluidic chambers is achieved by partitioning into spatial structures of the same size, for example, microwell and surface arrays . Notably, droplets are the mainstream partitions and can also be generated and manipulated on an array of electrodes, which is the basis of digital microfluidics (DMF).
Electrowetting on dielectric is one of the most intensively studied mechanisms in DMF because it enables control over fluid shape and flow through asymmetric electrical signals applied on different sides, making precise manipulation of single droplets possible . Basic droplet manipulations in DMF include sorting, splitting, and merging , which can be applied in various analytic fields, especially molecule detection . The digital nucleic acid assay is the third-generation technology of molecular diagnostics after conventional PCR and quantitative real-time PCR (qPCR), parallel to high-throughput sequencing and liquid biopsy. Digital nucleic acid assays have developed quickly in the field of molecular diagnostics targeting infectious pathogens over the last two decades . Absolute quantification in a digital nucleic acid assay begins with packaging samples and reagents into divided compartments so that every target sequence has the same probability of entering every discrete partition. Theoretically, each partition, acting as an independent micro-reaction system, receives a few target sequences or none. Through the many sensing mechanisms discussed above, compartments containing target sequences that produce signals above a particular threshold can be visualized by the unaided eye or by machines and labeled positive, while compartments producing signals below the threshold are labeled negative, making the signal of every partition Boolean. Therefore, from the number of compartments and the positive rate after the reaction, the original copy number of the tested sample can be calculated through the Poisson distribution without the standard curve required for conventional quantitative detection such as qPCR .
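The Poisson calculation referred to above can be written in a few lines: if a fraction p of partitions is positive, the mean occupancy is λ = −ln(1 − p), and dividing by the partition volume gives the absolute concentration. The partition count and volume below are illustrative assumptions, loosely modeled on a ddPCR-style run rather than taken from any cited study.

```python
import math

# Absolute quantification from a digital assay via the Poisson model:
# with mean lambda copies per partition, P(negative) = exp(-lambda),
# so lambda = -ln(1 - k/n) for k positive partitions out of n.

def copies_per_ul(n_total, n_positive, partition_vol_nl):
    p = n_positive / n_total
    lam = -math.log(1.0 - p)                 # mean copies per partition
    return lam / (partition_vol_nl * 1e-3)   # convert nl -> µl

conc = copies_per_ul(n_total=20000, n_positive=4800, partition_vol_nl=0.85)
print(f"{conc:.0f} copies/µl")               # -> 323 copies/µl, no standard curve
```

Note that the estimate diverges as p approaches 1, which is why samples far above the dynamic range must be diluted before partitioning.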
Compared with traditional molecular diagnostic technology, the digital nucleic acid assay is far more automated and integrated, with faster analysis, higher sensitivity, lower reagent consumption, and a lower risk of contamination, while also being easier to design and fabricate. For these reasons, the application of digital assays, especially droplet-based methods combining amplification and signal read-out techniques, was intensively studied during the SARS-CoV-2 outbreak. For example, Yin et al. combined droplet digital and rapid PCR techniques to detect the ORF1ab, N, and RNase P genes of SARS-CoV-2 in a microfluidic chip. Notably, the system can identify a positive signal within 115 s, which is faster than conventional PCR, suggesting its suitability for point-of-care detection (Fig. a). Dong et al. , Suo et al. , Chen et al. , and Alteri et al. also applied droplet digital PCR (ddPCR) in microfluidic systems to detect SARS-CoV-2 and achieved impressive results. To further improve detection speed, Shen et al. realized ddPCR-based chip imaging in just 15 s without image stitching, accelerating the lab-to-application transition of ddPCR technology. Beyond thermal cycling technologies such as PCR, isothermal amplification techniques are also applied for their simpler reaction conditions and rapid response times. Lyu et al. designed a SlipChip-based droplet assay capable of producing droplets of various sizes at high density in a single slipping step and of quantifying SARS-CoV-2 nucleic acids via digital LAMP (Fig. b). As a rapidly growing technology, CRISPR can also play an important role in digital nucleic acid assays thanks to its convenient colorimetric visualization without additional nucleic acid dyes. Combinatorial arrayed reactions for multiplexed evaluation of nucleic acids were developed by Ackerman et al.
to detect 169 human-associated viruses, including SARS-CoV-2, in droplets containing CRISPR-Cas13-based nucleic acid detection reagents in a microwell assay (Fig. c). Moreover, isothermal amplification and CRISPR technologies can be combined in one system to integrate the advantages of both. Park et al. developed a CRISPR/Cas12a-assisted digital assay in commercial microfluidic chips to detect both extracted and heat-inactivated SARS-CoV-2 based on single-step RT-RPA, which outperforms its bulk counterpart with a shorter detection time, higher signal-to-background ratio, wider dynamic range, and better sensitivity (Fig. d). Descriptions of these examples are given in Table . The digital nucleic acid assay is developing rapidly in infectious pathogen diagnosis, although some challenges deserve better solutions. First, the generation of partitions, especially droplets, should be rapid, stable, and uniform, which calls for an efficient and easily implemented method; approaches that depend on complex external pumps and tubing for compartmentalization are likely to be replaced by more convenient ones. Second, surfactants must be added to stabilize droplets in microfluidic devices, which adds cost; less expensive stabilizers or methods are therefore required to keep droplet reactions stable. Third, the measurement of original copy numbers relies on signal read-out technologies, which involve algorithms to identify positive compartments; program optimization and algorithm innovation are essential to achieve fast and accurate results. Our team created a novel Monte Carlo-based statistical model for the absolute quantification of pathogenic nucleic acids via digital LAMP, the results of which agree with the proposed mathematical model . Lastly, digital assays can be combined with DMF to perform individual and parallel reactions, leaving huge space for development.
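The Monte Carlo idea mentioned above can be illustrated with a toy simulation (our own minimal sketch, not the cited model): scatter a known number of template copies randomly over the partitions, call any occupied partition positive, and check that the Poisson formula recovers the input copy number from the positive fraction alone.

```python
import math
import random

# Toy Monte Carlo check of the digital-assay principle: distribute
# template copies at random among partitions, call partitions with
# >= 1 copy positive, then recover the input copy number from the
# positive fraction via lambda = -ln(1 - p).

random.seed(1)
n_partitions = 10000
true_copies = 3000

counts = [0] * n_partitions
for _ in range(true_copies):
    counts[random.randrange(n_partitions)] += 1
positives = sum(1 for c in counts if c > 0)

p = positives / n_partitions
estimate = -math.log(1.0 - p) * n_partitions    # recovered copy number
print(f"positives: {positives}, estimated copies: {estimate:.0f} "
      f"(true: {true_copies})")
```

Because multiple copies can land in the same partition, the positive count underestimates the copy number directly, yet the Poisson correction recovers it to within sampling noise; repeating the simulation many times gives the confidence interval of the estimator.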
Overall, we expect integrated, automated digital nucleic acid assays applied in infectious pathogen diagnosis to enable sample-to-result testing and POCT. Microfluidic POCT devices exhibit many advantages in in vitro molecular diagnostics, especially in developing areas. Compared with laboratory testing, the operations of microfluidic POCT devices, from sample purification to nucleic acid amplification and pathogen measurement, are integrated into a single microfluidic chip, cartridge, or tube, and results are obtained easily and rapidly at comparatively low cost . Microfluidic POCT devices have drawn increasing interest from manufacturers worldwide because of their automated tests and limited reagent requirements . Microfluidic POCT devices therefore promise a bright future for molecular diagnostics in both urgent and routine settings and deserve further study. Herein we present some typical current commercial microfluidic POCT devices for molecular diagnostics to illustrate the state of development. Manufacturers have developed commercial devices for various fields, such as food security, agricultural product testing, medical diagnosis, animal husbandry, and environmental testing . Among these, medical diagnosis, especially molecular diagnostics, is of the greatest relevance to mankind, making this application the most popular.
Taking SARS-CoV-2 as an example, since the COVID-19 outbreak, microfluidic devices or newly designed chips targeting the virus have been launched, such as FilmArray ® Biofire ® [Biofire (Salt Lake City, UT, U.S.)] , GenPlex ® [BOHUI (Beijing, China)] , Vivalytic [BOSCH (Waiblingen, BW, Germany); Randox (Antrim, N.IRE, UK)] , RTisochip ™ -A (CapitalBio) , RTisochip ™ -W (CapitalBio) , DxLab-2A (CapitalBio) , Cue ™ [Cue health (San Diego, CA, U.S.)] , Simplexa ™ [Focus Diagnostics (Cypress, CA, U.S.)] , QuanPLEX [IntelliBio (Qingdao, SD, China)] , the microchip-based real-time PCR analyzer AriaDNA [Lumex Instruments (Mission, BC, Canada)] , Novodiag ® [Mobidiag (Espoo, Finland)] , Cobas ® Liat ® [Roche (Indianapolis, IN, U.S.)] , iGeneTec MA3000 [Superchip technology (Shanghai, China)] , BINAS [Tsinghua University (Beijing, China)] , Visby Medical ™ [Visby Medical (San Jose, CA, U.S.)] , and WizDx ™ F-150 Real-time PCR Systems [Wizbiosolutions (Seongnam-si, Republic of Korea)] . Most of these devices also targeted other respiratory viruses, such as influenza, before the epidemic. Sexually transmitted diseases are increasingly gaining public attention, prompting the creation of associated equipment. For example, the IO single module system [Binx health (Boston, MA, U.S.)] can detect Chlamydia trachomatis , a cause of sexually transmitted disease, in just 30 min, and Visby Medical ™ can detect three sexual pathogens at the same speed, while GenPlex ® and Vivalytic can also test for vaginal pathogens, such as human papilloma virus. Because HIV remains a major immunologic challenge in human medicine, its detection with POCT devices is particularly meaningful, so Abbott launched Alere ™ Q [Abbott (Des Plaines, IL, U.S.)] with matching HIV-1/2 detection chips. The test can be completed in 52 min, requiring only 25 µl of peripheral blood or plasma.
Biocartis focuses on tumors and launched Idylla ™ [Biocartis (Mechelen, Belgium)] to target clinically significant test sites of genes such as BRAF , KRAS , NRAS , and EGFR . Because of the similarity of molecular diagnostic methodologies, many microfluidic devices perform multiplex detection through concurrent tests, and further test items may be added through product upgrades. Examples include iChip-400 and Onestart-1000 from Baicare (Beijing, China), BD MAX ™ from BD (Sparks, MD, U.S.), FilmArray ® Biofire ® from Biofire, GeneXpert ® Infinity Systems from Cepheid (Sunnyvale, CA, U.S.) , Unyvero A50 from Curetis (West Boylston, MA, U.S.), Revogene ® from GenePOC (Quebec, PQ, Canada), ePlex from GenMark (Carlsbad, CA, U.S.), AriaDNA from Lumex Instruments, Novodiag ® from Mobidiag, Verigene ® from Nanosphere (Beverly, MA, U.S.), and Visby Medical ™ from Visby Medical. These devices greatly increase the efficiency of screening and diagnosis. More details of all the aforementioned devices are given in Table . Commercial microfluidic POCT devices are striving to match the sensitivity and specificity of modern medical tests conducted in hospitals and laboratories, and they show exclusive advantages, especially in detection diversity and degree of integration. In an epidemic, commercial microfluidic devices are of great importance for diagnosis because of their efficient technologies, optimized reaction conditions, and convenience, without the need for a large-scale laboratory or testing center. Because commercial microfluidic POCT devices were developed in a short time during a disease outbreak by transplanting conventional steps, there is still much room for improvement in efficiency, automation, integration, sensitivity, specificity, portability, and affordability, leaving the industry in an early stage of application.
We expect mature techniques, combined with elaborately designed and fabricated chips, to yield true sample-to-result instruments. Infectious diseases pose problems for public medical systems and have attracted much attention from the public and scientists. Microfluidics is one of the best technologies for the molecular diagnostics of infectious diseases and has made great achievements, especially during the COVID-19 outbreak. In this review, we presented the applications of microfluidics-based strategies for infectious disease detection. In the first part, we systematically discussed the common processes of microfluidics-based molecular testing, including sample preprocessing (silicon-, magnetic-, and porous material-based strategies), nucleic acid amplification (PCR, isothermal amplification, and CRISPR-based amplification-free methods), and signal read-out (electrochemical, fluorescence, colorimetric, chemiluminescence, surface plasmon resonance-based, and magnetic biosensors). Next, various microfluidic platforms, including LOCC, LOAD, μPADs, and LFA, were compared to highlight their features, advantages, and disadvantages. We further discussed and emphasized novel applications of the digital nucleic acid assay for absolute quantification. Subsequently, we surveyed 27 commercial microfluidics-based POCT devices for molecular diagnostics from the past decade and summarized their target pathogens and performance. There is still ample room for microfluidics to develop in dealing with the severe, ongoing pandemic, and, more significantly, new infectious diseases may emerge in the near future. Traditional technologies are mature and optimized but require multiple steps and frequent transfers of samples between platforms; these sophisticated processes lead to unnecessary contamination and complicated manual operations. Thus, the trend toward fully integrated microfluidics, combining sampling, sensing, and signaling modules, is unstoppable.
In the sampling module, a large quantity of molecules is expected to be extracted from a limited sample, which calls for efficient cleavage enzymes, nucleic acid transport carriers, and cleaning agents. In the sensing module, false-negative results caused by low-sensitivity detection often lead to misdiagnosis and burden the public medical system; pandemic prevention calls for high-throughput testing that can precisely detect very few nucleic acids. In practical applications, diagnostic requirements are so diverse that multiplex diagnostics with expanded test menus are better suited to future testing. In the signaling module, great efforts have been made to accurately identify the signals transformed from amplified molecules using algorithms incorporating artificial intelligence, avoiding the errors and limitations of manual judgment. Some novel strategies are still in their early stages; for example, target molecules may be tested directly from sample solutions, omitting pretreatment, in which case nucleic acids must be specifically distinguished from cluttered background molecules, which is challenging. In addition, detecting nucleic acids while bypassing the sensing module demands more sensitive testing methods, such as CRISPR, which can respond significantly to individual molecules. Moreover, the industrialization of microfluidics is still in a start-up phase, reflected in complex channel designs, expensive substrate materials, the need to optimize reactions, liquid leakage, valve failure, and difficulties in reproducibility and recyclability. These issues are the main barriers to large-scale adoption, so further improvement is needed to build convenient chip-design platforms, to find less expensive substrate substitutes with the help of materials science, to enhance functional modularity, and to push the automation of chip production .
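As a minimal illustration of algorithmic positive-calling in the signaling module, the sketch below separates synthetic end-point fluorescence values into negative and positive partitions with Otsu's threshold. Real pipelines, including the AI-based approaches mentioned above, are considerably more sophisticated, and all signal values here are simulated.

```python
import random

def otsu_threshold(values, bins=256):
    """Threshold maximizing the between-class variance of a 1-D histogram."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = lo, -1.0
    w0 = sum0 = 0
    for i, h in enumerate(hist):
        w0 += h                       # class 0: bins 0..i
        w1 = total - w0               # class 1: bins i+1..end
        if w0 == 0 or w1 == 0:
            continue
        sum0 += i * h
        mu0, mu1 = sum0 / w0, (total_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, lo + (i + 1) * width
    return best_t

# Synthetic bimodal end-point signals: ~80% negative partitions around
# 100 a.u. and ~20% positive partitions around 900 a.u. (arbitrary units).
random.seed(0)
signals = [random.gauss(100, 20) for _ in range(800)] + \
          [random.gauss(900, 50) for _ in range(200)]
threshold = otsu_threshold(signals)
positives = sum(s > threshold for s in signals)
print(f"threshold ≈ {threshold:.0f} a.u., positive partitions: {positives}")
```

On well-separated bimodal data such as this, the threshold lands in the valley between the two populations, so the positive count matches the simulated positives; overlapping populations are where the more elaborate learned classifiers pay off.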
Fortunately, for dealing with infectious diseases now and in the future, microfluidics-based molecular diagnostic strategies are indispensable and are receiving growing attention from frontier scientists. They will become the mainstream of large-scale detection, requiring only minimal sample input to conduct diagnosis automatically and at low cost.
Rhizosphere assembly alters along a chronosequence in the Hallstätter glacier forefield (Dachstein, Austria) | b195bbc4-df44-41b3-a215-99c80e922f0d | 10858390 | Microbiology[mh] | The complex interplay between plant hosts and their rhizosphere microbiota is important for plant health (Cordovez et al. , Berg et al. , Trivedi et al. ). Interactions between plants and microorganisms have the potential to significantly contribute to plant adaptation, i.e. mediating host immunity, improving tolerance to environmental stress, facilitating access to new nutrient sources, and supporting resilience when exposed to specific environmental changes. They are collectively known as microbe-mediated adaptation (Petipas et al. ). All these processes are suggested to be influenced by plant genotype and the environment, including the extent of anthropogenic impacts on the ecosystem (Menzel et al. , Kusstatscher et al. , Berg and Cernava , Cosme ). Plants assemble their rhizosphere microbiome by recruiting bacteria from seeds and the surrounding environment (Abdelfattah et al. , Wicaksono et al. ). However, there are still several open questions regarding the complex interactions between plants and microbes, such as how much the plant itself assembles a microbial community from the surrounding soil and how this process is influenced by environmental changes. Glaciers are model ecosystems of special interest due to their global relevance and accelerated retreat in the face of anthropogenic climate change. In glacier forefields, the successional age drives plant species composition, resulting in a gradient of increasing diversity and specificity within plant communities (Fickert et al. , Ficetola et al. ). Moreover, the succession is driven by stochastic and deterministic processes. For plants, it is known that early successional species are rather generalists, and only later during succession, specialist species are found (Büchi and Vuilleumier ). 
Recently, glacier forefields were used to advance our understanding of the successional development of soil microbiomes (Tscherko et al. , Bardgett et al. ). A study in a glacier forefield in the Austrian Alps showed that the soil microbial community was more closely related to plant communities than to environmental factors, supporting the notion that biotic factors are crucial in the successional assembly of diverse ecosystems (Junker et al. ). In contrast, He et al. ( ) tried to predict plant species composition from microbial composition and did not find a clear correlation between plant and microbiome assembly. Additionally, abiotic factors (i.e. physiochemical and microclimatic spatial variation at the site scale) shape bacterial community assembly during primary colonization (Rolli et al. ). However, the extent to which these factors play a role in the rhizosphere microbiome assembly is not well understood, especially during early succession. Forefields of retreating glaciers provide an ideal setting to study the temporal dimension of rhizosphere microbiome assembly by space-for-time substitution and can provide insights into future shifts of rhizosphere microbiomes that may occur under changing environmental conditions (Bradley et al. , Hotaling et al. ). Here, we investigated the succession of bacterial communities in the rhizosphere of three pioneering plant species in the forefield of the Hallstätter glacier (Austria). We used 16S rRNA gene amplicon sequencing and shotgun metagenomic sequencing to analyse the composition and function of microbiomes associated with Papaver alpinum L., Hornungia alpina (L.) O. Appel, and Sedum atratum L. 
The main objectives of this study were (i) to identify the adaptation of the functional potential associated with pioneer plant microbiomes during early succession after 10 years of deglaciation and (ii) to characterize bacterial compositional shifts in the soil and rhizosphere of the three pioneer plants where the glacier retreated 10, 70, and 150 years ago. Understanding successional shifts in microbiomes that are emerging in glacier forefields provides key insights into the consequences of future climate change regarding the dynamics of biodiversity and potential ecosystem functions.
Sample collection and DNA extraction

Rhizosphere samples of three alpine plant species, P. alpinum, H. alpina , and S. atratum , were collected in the forefield of the Hallstätter glacier (see Fig. – ). We chose to sample H. alpina, P. alpinum , and S. atratum because they were present in all sampling areas. The plant samples were obtained in regions where the glacier receded ∼10, 70, and 150 years ago; these sampling sites were designated as glacier 10 , glacier 70 , and glacier 150 , respectively (Fig. and ). The sampling followed a long-term permanent plot design initiated by Kühn ( ). Rhizosphere samples were taken by lightly shaking the roots to remove loosely attached soil before they were further treated in the laboratory as described below. The time of deglaciation at the locations was adapted from Bruhm et al. ( ). The mean annual temperature and number of frost-free days at the three sites were obtained using the Climate Downscaling Tool (ClimateDT; https://www.ibbr.cnr.it/climate-dt/ ). At the glacier 10 sampling site, three independent biological replicates, each consisting of roots with adhering rhizosphere soil from three plants, were obtained from multiple plots. We used homogenized, pooled samples from a separately obtained initial sample ( n = 3 plants per replicate) to acquire a more representative subsample of the microbial community associated with the plants. Additionally, bulk soil samples were collected from the area where the glacier receded 10 years ago. However, because plants did not grow in multiple plots at the glacier 70 and glacier 150 sampling sites, three biological replicates, each composed of samples from at least three adjacent plants, were taken from a single plot at these sites. During the sampling event, no bare soil without vegetation could be obtained from the glacier 70 and glacier 150 sampling sites.
This was likely attributable to gravel and small stones being the main soil constituents at the glacier 70 and glacier 150 plots. Consequently, it was not possible to compare microbial data from bulk soil and rhizosphere samples at the glacier 70 and glacier 150 sampling sites. To extract DNA from soil and rhizosphere samples, 5 g of plant roots with adhering rhizosphere soil was added to 20 ml of sterile 0.85% NaCl, agitated by hand, and vortexed for 3 min. Aliquots (2 ml) of the obtained suspensions were centrifuged for 20 min at 16 000 × g and 4°C in a DuPont Instruments Sorvall RC-5B Refrigerated Superspeed Centrifuge (USA). The resulting pellets were weighed (∼0.1 g) and stored at −20°C until DNA extraction. Total DNA was extracted using the FastDNA Spin Kit for Soil (MP Biomedicals, USA) following the manufacturer’s protocol. Briefly, the pellets were placed in a Lysing Matrix E tube (supplied with the FastDNA™ Spin Kit for Soil) and further processed to lyse microbial cells. The extracted DNA was then purified by a silica-based spin filter method.

Amplicon sequencing of 16S rRNA genes and shotgun metagenomic sequencing of total community DNA

To investigate potential bacterial functions that may play a role during early succession, we performed shotgun metagenomic sequencing with samples from the sampling sites where the glacier receded 10 years ago (glacier 10 site, Fig. ). The extracted DNA was sent to the sequencing provider Genewiz (Leipzig, Germany), which performed the DNA library preparations and sequencing reactions. The DNA sequencing library was prepared using the NEB NextUltra DNA Library Preparation Kit (NEB, UK) according to the manufacturer’s guidelines. In brief, the genomic DNA was fragmented using a Covaris S220 instrument and subjected to end repair and adenylation. Adapters were then ligated following adenylation of the 3′ ends.
The adapter-ligated DNA was indexed and enriched by performing limited-cycle polymerase chain reaction (PCR). The DNA sequencing library was then sequenced using an Illumina HiSeq 2500 system and 2 × 150 bp paired-end sequencing. For all sampling sites, total DNA was subjected to amplicon PCRs to target the whole prokaryotic community (archaea and bacteria, Fig. ). We used the 515f/806r primer set to amplify the V4 region of prokaryotic 16S rRNA genes (Caporaso et al. ). For demultiplexing, we added sample-specific barcodes to each primer. The barcodes utilized in this study were recommended by the Earth Microbiome Project ( http://www.earthmicrobiome.org/ ). The PCR reaction (25 µl) contained 1 × Taq&Go (MP Biomedicals, Illkirch, France), 0.25 mM of each primer, and 1 µl template DNA. To verify successful amplification, the PCR products were loaded onto a 1% agarose gel and subjected to gel electrophoresis at 140 V for 60 min. The products were then purified using the Wizard ® SV Gel and PCR Clean-Up Kit (Promega, Madison, USA). Subsequently, the DNA concentration of the purified barcoded samples was measured using the Qubit dsDNA BR Assay (Thermo Fischer Scientific) and combined in equal amounts (∼500 ng per sample). The pooled library was sent to the sequencing provider Genewiz (Leipzig, Germany), and the sequencing libraries were prepared using the Nextera XT Index Kit from Illumina. The sequencing libraries were then sequenced using an Illumina MiSeq (v2 reaction kit) instrument with 2 × 300 bp paired-end sequencing.

Assembly-based metagenomic analyses

Unless otherwise specified, all software was run with the default settings. We used Trimmomatic and VSEARCH to remove Illumina sequencing adaptors and perform preliminary quality filtering on metagenomic reads (removal of low-quality reads; Phred 20). The metagenomic reads were assembled using the Megahit assembler (Li et al. ). Only contigs with a length >1 kb were kept for further analysis.
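The >1 kb contig cutoff applied after assembly is a simple length filter. The following minimal Python sketch (a hypothetical `filter_contigs` helper, not part of the actual Megahit workflow) illustrates the idea:

```python
def filter_contigs(fasta_text, min_len=1000):
    """Keep FASTA records whose sequence is longer than min_len (here: >1 kb)."""
    records = {}
    header, seq_parts = None, []
    for line in fasta_text.strip().splitlines():
        if line.startswith(">"):
            # Flush the previous record before starting a new one
            if header is not None:
                records[header] = "".join(seq_parts)
            header, seq_parts = line[1:].strip(), []
        else:
            seq_parts.append(line.strip())
    if header is not None:
        records[header] = "".join(seq_parts)
    # Retain only contigs exceeding the length cutoff
    return {h: s for h, s in records.items() if len(s) > min_len}
```

In practice such filtering is typically done with dedicated tools (e.g. seqkit) on the assembler output; the sketch only makes the cutoff explicit.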
The annotation of assembled contigs was conducted using the metagenome classifier Kraken2 (Wood et al. ). Open reading frames were predicted using Prodigal v2.6.3 (Hyatt et al. ). To remove redundant sequences, we used CD-HIT-EST v4.8.1 to cluster protein-coding gene sequences into a nonredundant gene catalogue at 95% nucleotide identity (Li and Godzik ). The nonredundant genes were annotated using the BLAST algorithm in DIAMOND combined with eggNOG-mapper (Buchfink et al. , Huerta-Cepas et al. ) and the eggNOG database v5.0 (Huerta-Cepas et al. ). We also used eggNOG-mapper to obtain taxonomic assignments for each protein-coding gene. All protein-coding gene sequences that were assigned to Bacteria based on the eggNOG-mapper taxonomic classification and with retrievable KEGG Orthology (KO) annotations were kept for further analyses. To generate gene profiles from the samples, we back-mapped quality-filtered reads to the generated nonredundant gene catalogue using BWA and SamTools (Li et al. , Li and Durbin ). This step yielded >700 M reads that were classified as bacterial proteins according to eggNOG-mapper.

Reconstruction of bacterial metagenome-assembled genomes

We used multiple binning methods, i.e. Maxbin2 v2.2.7, MetaBAT2 v2.12.1, and CONCOCT v1.1.0 (Alneberg et al. , Wu et al. , Kang et al. ), to construct metagenome-assembled genomes (MAGs). The MAGs with the highest quality among all genome binners were selected using DASTool v1.1.1 (Sieber et al. ). Additional binning using Vamb (Nissen et al. ) and SemiBin (Pan et al. ) was performed using multisample binning approaches by concatenating individual assembled contigs from all samples. The quality of MAGs (completeness and percentage of contamination) was calculated using CheckM v1.0.13 (Parks et al. ).
Because we wanted to compare the metabolic capabilities of different MAGs, only medium-quality MAGs with a completeness >50% and contamination levels <10% according to the current definition of the minimum information MAG standards (Bowers et al. ) were kept for further analyses. MAGs were dereplicated using dRep v2.2.3 (Olm et al. ) to obtain a nonredundant metagenome-assembled bacterial genome set. We used the Genome Taxonomy Database Toolkit to obtain taxonomic information for each MAG, and phylogenetic trees were constructed using PhyloPhlAn (Asnicar et al. ) by including closely related taxa from the PhyloPhlAn database. Abundance profiles of each MAG were calculated using CoverM v0.4.0 ( https://github.com/wwood/CoverM ) with the option -m rpkm. MAG abundance was thus calculated as reads per kilobase per million mapped reads (RPKM), i.e. mapped reads divided by the MAG length (in kilobases) and the total number of reads in each metagenomic dataset (in millions of reads). Gene annotations of constructed MAGs were performed using DRAM v.1.4.6 (Distilled and Refined Annotation of Metabolism) (Shaffer et al. ).

Bacterial community structure and diversity analysis

To analyse the amplicon sequencing dataset, QIIME2 version 2019.10 was used ( https://qiime2.org ) (Bolyen et al. ). Raw reads were demultiplexed and primer sequences were removed using the cutadapt tool (Martin ) before importing the data into QIIME2 with the script ‘qiime tools import’. The demultiplexed reads were subjected to quality filtering, denoising, and chimeric sequence removal using the DADA2 algorithm (Callahan et al. ). The latter step generated the amplicon sequence variants (ASVs) table, which records the number of times each exact ASV was observed per sample. The output sequences were subsequently aligned against the reference database Silva v132 (Pruesse et al. ) using the VSEARCH classifier (Rognes et al. ) to obtain taxonomic information for each ASV.
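The RPKM-style MAG abundance described above normalizes mapped read counts by MAG length (in kilobases) and sequencing depth (in millions of reads). A minimal sketch with hypothetical numbers (CoverM performs this calculation internally; the function name `mag_rpkm` is illustrative):

```python
def mag_rpkm(mapped_reads, mag_length_bp, total_reads):
    """RPKM-style abundance: reads mapped to a MAG, normalized by
    MAG length in kilobases and library size in millions of reads."""
    length_kb = mag_length_bp / 1_000
    total_millions = total_reads / 1_000_000
    return mapped_reads / (length_kb * total_millions)

# Hypothetical example: 2000 reads mapped to a 4 Mb MAG in a 10 M-read library
abundance = mag_rpkm(2000, 4_000_000, 10_000_000)  # → 0.05
```

This double normalization makes abundances comparable both between MAGs of different sizes and between libraries of different sequencing depths.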
In the Silva database, the bacterial class Betaproteobacteria was reclassified to the order-level Betaproteobacteriales within the bacterial class Gammaproteobacteria . Prior to further analyses, only reads assigned to Bacteria were retained. Reads assigned to plastids and mitochondria were removed. The negative control used for the PCRs produced a minimal number of reads (10 reads, 3 ASVs). We eliminated any overlapping ASVs derived from negative controls and excluded the negative control from the datasets. The amplicon sequencing approach resulted in a total of 1 259 583 bacterial reads (min = 4046 and max = 174 837, ), which were assigned to a total of 8310 bacterial ASVs.

Statistical analysis

Bacterial community diversity and composition were analysed in R v4.1.2 using the R packages Phyloseq v1.38.0 and vegan v2.6–4 (Oksanen et al. , R Core Team , McMurdie and Holmes ). For alpha diversity analysis, the bacterial abundance table was normalized by subsampling to the lowest number of reads among the samples (4046 reads). The majority of the rarefaction curves obtained for each sample approached the saturation plateau, indicating that the sampling size was sufficient to capture overall bacterial diversity ( ). We estimated alpha diversity using the Shannon index and determined the significance of observed differences using the nonparametric (rank-based) Kruskal–Wallis test, which was followed by a pairwise Wilcoxon test corrected for multiple comparisons. MetagenomeSeq’s cumulative sum scaling (CSS) (Paulson et al. ) was used for subsequent beta diversity analyses. Beta diversity analysis was performed using a CSS-normalized Bray–Curtis dissimilarity matrix. The dissimilarity matrix was subjected to Adonis analysis to test for significant effects between the different plant species and different regions where the glacier receded.
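The Bray–Curtis dissimilarity underlying these beta diversity analyses reduces to a simple ratio: the summed absolute abundance differences divided by the summed total abundances of two samples. A minimal Python sketch (the study itself used vegan in R on CSS-normalized counts):

```python
def bray_curtis(x, y):
    """Bray–Curtis dissimilarity between two abundance vectors:
    sum(|x_i - y_i|) / sum(x_i + y_i).
    0 = identical communities, 1 = no shared taxa."""
    numerator = sum(abs(a - b) for a, b in zip(x, y))
    denominator = sum(a + b for a, b in zip(x, y))
    return numerator / denominator if denominator else 0.0
```

Computing this for every pair of samples yields the dissimilarity matrix that is then passed to ordination (PCoA) and Adonis/PERMANOVA.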
A pairwise Adonis test for multiple comparisons was performed using the pairwiseAdonis v0.4 custom script ( https://github.com/pmartinezarbizu/pairwiseAdonis ). To investigate the plant specificity of microbial communities, we calculated Spearman correlation coefficients by plotting microbial community dissimilarity between all plant species and different successional ages (10, 70, and 150 years). The relative contribution of deterministic and stochastic processes to bacterial assembly was estimated using the normalized stochasticity ratio (NST) (Ning et al. ). Following the randomization of the metacommunity, the NST index was generated using the observed dissimilarity between communities and the randomly expected dissimilarity between communities. The NST index distinguishes between stochastic (>50%) and deterministic (<50%) assemblies. Lastly, linear discriminant analysis and effect size estimation were implemented using LefSe (Segata et al. ) to identify bacterial taxa that were enriched in glacier 10 , glacier 70 , and glacier 150 samples, respectively.

Identification of enriched ASVs in a global catalogue of microorganisms from various cryospheric ecosystems

We aimed to understand the origins of ASVs that were enriched in glacier 10 , glacier 70 , and glacier 150 samples. We used a large-scale dataset of the cryosphere (Bourquin et al. ) for a deeper analysis to explore whether cryophilic glacier microbes contribute to the soil and plant microbiome of the glacier forefield in our study. Bourquin et al. ( ) generated a global inventory of the microbiome from snow, ice, permafrost soils, and coastal as well as freshwater ecosystems under glacier influence by analysing amplicon sequencing data generated with the same primers as used in our study, 515f-806r targeting prokaryotic 16S rRNA genes.
Therefore, using our data, we aligned all the ASVs that were enriched in glacier 10 , glacier 70 , and glacier 150 samples based on the LefSe analysis with representative sequences from the global catalogue of microorganisms from various cryospheric ecosystems (Bourquin et al. ). ASVs were considered matches when they mapped with 100% coverage and 100% identity against the reference catalogue of 16S rRNA gene ASVs PP2 ( https://doi.org/10.5281/zenodo.6541278 ).
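Conceptually, requiring 100% coverage and 100% identity for amplicons of the same 16S region amounts to an exact sequence match. A simplified Python sketch under that assumption (the `match_asvs` helper is hypothetical; the actual assignment was done by alignment against the published catalogue):

```python
def match_asvs(query_asvs, reference_seqs):
    """Flag query ASVs that occur verbatim in a reference catalogue.
    For equal-length amplicons of the same 16S region, an exact string
    match corresponds to 100% identity at 100% coverage."""
    reference = set(reference_seqs)  # hashing makes lookups O(1)
    return {asv_id: (seq in reference) for asv_id, seq in query_asvs.items()}
```

A real alignment additionally handles length differences and reverse complements, which is why a mapper is used in practice rather than plain string comparison.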
Genome-centric analysis revealed the presence of bacterial key genes for nutrient uptake that can support host plants as well as stress response during early succession

Shotgun metagenome analysis of bulk soil and rhizosphere samples collected from plots where the glacier receded 10 years ago allowed us to identify taxa and functions that were enriched in the plant rhizosphere. A gene-centric approach identified a total of 6321 KOs with a maximum relative abundance of 0.46% and a median relative abundance of 0.003% of total mapped reads. We identified genes that might be important for bacteria to survive during early succession. For instance, genes related to manganese and iron transport systems were consistently detected in the metagenome samples (average relative abundance 0.09% of total mapped reads). Gene clusters encoding the branched-chain amino acid transporters ( livFGHKM ), which are responsible for the transport of extracellular branched-chain amino acids, were detected in high abundance (relative abundance 1.15%). Genes associated with chemolithotrophic pathways, i.e. sulfite dehydrogenase and Ni/Fe-hydrogenase, were detected (relative abundance 0.04%). A gene involved in nitrogen fixation ( nifU ) was also recovered from all samples (relative abundance 0.02%). Microbial potential for solubilization and utilization of inorganic phosphate was indicated by the occurrence of genes encoding alkaline phosphatase ( phoA, phoB , and phoD ) and inorganic pyrophosphatase ( ppa ). Moreover, we detected genes involved in the production of cold shock proteins (relative abundance 0.89%) and chitinase (relative abundance 0.03%). We further constructed MAGs to compare functional potentials across phylogenetic lineages (Fig. ). The shotgun metagenomic data yielded a total of 54 bacterial MAGs with a completeness above 50% and contamination levels below 10% ( ).
Among them, six MAGs were considered to represent high-quality genomes (completeness >90% and contamination levels <5%). Most of the MAGs were assigned to Burkholderiales, Pseudomonadales, Sphingomonadales ( Proteobacteria ), Solirubrobacterales, Actinomycetales , and Mycobacteriales ( Actinobacteriota ). MAGs that were assigned to the bacterial orders Pseudomonadales, Steroidobacterales, Actinomycetales, Mycobacteriales , and SG8-23 carried the nifU gene, which encodes a nitrogen fixation protein. More than half of the MAGs carried phoD , encoding an alkaline phosphatase. This gene was detected within different bacterial orders such as Burkholderiales, Pseudomonadales, Sphingomonadales, Actinomycetales , and Mycobacteriales . Several genes encoding proteins related to nutrient uptake, i.e. a multiple sugar transport system ( ABC.MS.P ), a urea transport system ( urtC ), and an iron complex transport system ( ABC.FEV.P ), were found. Among the MAGs, we detected high occurrences of genes encoding a cold shock protein ( cspA ), an exopolysaccharide production protein ( exoQ ), and a spermidine/putrescine transport system ( ABC.SP.P ), which are likely related to the adaptability and stress response of bacteria in the Hallstätter glacier forefield.

Similar bacterial functions were detected in the rhizosphere of different plants and soil samples during early succession

We detected highly similar gene profiles between rhizosphere samples of different plant species. Adonis analysis indicated no difference in bacterial gene functional profiles between the different plant species that grew where the glacier receded 10 years ago (Adonis— P = 0.298). When the bulk soil samples were included, the gene profiles of the rhizosphere samples differed significantly from those of the bulk soil (Adonis— P = 0.028, R 2 = 43%). However, pairwise analysis indicated only a tendency toward different gene profiles between the bulk soil and the rhizospheres of P. alpinum and H. alpina ( P = 0.100), while no difference was observed in comparison to S. atratum samples ( P = 0.200). Overall, pairwise comparisons suggested only minor differences in gene profiles and microbial functioning between the rhizospheres of the different plants and the bulk soil during the early succession represented by glacier 10 samples. Of the 6321 detected KOs, a closer look at the differentially abundant functions identified only a small number that were enriched in the rhizosphere samples of H. alpina ( n KO = 23), S. atratum ( n KO = 26), and P. alpinum ( n KO = 1) when compared to bulk soil samples (LefSe— P < 0.05, LDA score > 2). Therein, we detected genes encoding glutamate synthase ( GLU ), branched-chain amino acid transport systems ( livK and livM ), a serine/threonine-protein kinase ( prkC ), malate dehydrogenase ( maeB ), and a heavy metal efflux transporter ( czcA ).

Shifts in rhizosphere bacterial richness and community structure during primary colonization of the Hallstätter glacier forefield

Using the amplicon sequencing dataset, we explored microbial succession in the rhizosphere of the three plant species, H. alpina, P. alpinum , and S. atratum , along the deglaciation chronosequence. Amplicon sequencing yielded results congruent with the shotgun metagenome data. Bulk soil samples clustered together with rhizosphere samples from the glacier front (glacier 10 , ), indicating high similarity between these samples. To investigate the impact of deglaciation and host plants on bacterial richness and community structure, soil samples were excluded from the analysis. Deglaciation affected bacterial richness (Kruskal–Wallis test— P = 0.021, Fig. ) but not bacterial diversity in the rhizosphere ( P = 0.120, Fig. ).
A higher bacterial richness ( n ASV = 875) was found for glacier 10 in comparison to other regions (glacier 70– n ASV = 556; glacier 150– n ASV = 544). In contrast, a significant difference in bacterial richness and diversity was not observed when plant species were used as factors (Kruskal–Wallis test— P = 0.701 and P = 0.697, Fig. and D). Shifts in the bacterial community composition in the rhizosphere as a response to deglaciation were observed. The constructed PcoA plot indicated that samples were clustered according to the deglaciation period. All samples that were obtained from bulk soil near the glacier front (glacier 10 ) showed a tendency to cluster together (Fig. ). Adonis analysis indicated that deglaciation contributed significantly to bacterial community variations ( P = 0.001, R 2 = 28%). Furthermore, according to the PcoA plot, the rhizosphere microbiome of P. alpinum from glacier 70 and glacier 150 samples clustered together. A similar pattern was observed for the rhizosphere microbiome of S. atratum , whereas the rhizosphere microbiomes of H. alpina obtained from the glacier 70 and glacier 150 locations ordinated away from each other. Plant species also contributed significantly to the bacterial community variations but to a lesser degree (Adonis— P = 0.001, R 2 = 14%). When analysed separately for each deglaciation region, we did not observe a significant difference in rhizosphere bacterial community composition between the different plant species that were obtained from glacier 10 (Adonis— P = 0.366, R 2 = 26%). This result reflects the nonsignificant differences in bacterial functional profiles in the rhizosphere of different plant species that grew within the glacier 10 site, as described previously. However, significant differences in bacterial community composition between plant species were observed in the samples obtained from glacier 70 (Adonis— P = 0.007, R 2 = 60%) and glacier 150 (Adonis— P = 0.005, R 2 = 61%). 
When calculating bacterial community dissimilarity between different plant species, we observed a higher bacterial community dissimilarity between different plant species at the later stages of the succession rather than the early stage. Additionally, Spearman’s correlation analysis indicated that bacterial community dissimilarity between different plant species was positively correlated to successional age ( P = 0.009, r = 0.35; ). Moreover, our results showed that bacterial community assembly was more stochastic at glacier 10 (NST = 64%) compared to glacier 70 (NST = 51%—Wilcoxon test P < 0.001) and glacier 150 (NST = 47%—Wilcoxon test P < 0.001) ( ). Taken together, these results suggest that bacterial communities were potentially selected by the plant species at later stages of the succession (i.e. glacier 70 and glacier 150 ). Gammaproteobacteria, Alphaproteobacteria, Bacteroidia , and Actinobacteria were identified as the most abundant bacterial classes, which contributed to 19.6%, 15.0%, 9.0%, and 6.6% of the total relative abundance, respectively (Fig. ). We did not observe gradual changes in the relative abundance of the two dominant bacterial classes, i.e. Gammaproteobacteria and Alphaproteobacteria for different deglaciation periods. Gammaproteobacteria were the dominant bacterial class in H. alpina (22.7% and 26.5%) and S. atratum (20.3% and 23.7%) in glacier 10 and glacier 150 samples, respectively (Fig. ). Interestingly, the relative abundance of Actinobacteria (4.0%—glacier 10 , 11.6%—glacier 70 , 3.2%—glacier 150 ) in the rhizosphere of H. alpina and the relative abundance of Blastocatellia (3.9%—glacier 10 , 9.5%—glacier 70 , 3.9%—glacier 150 ) in the rhizosphere of S. atratum showed an opposite pattern. The relative abundance of Actinobacteria was relatively low in the bulk soil obtained from glacier 10. Relative abundances of Alphaproteobacteria were relatively stable for the different deglaciation periods ( H. alpina– 14.1%–19.7%, P. 
alpinum –12.2%–15.1%, and S. atratum –16.3%–17.5%). Relative abundances of Bacteroidia were higher in the rhizosphere of all plant species as well as bulk soil samples that were collected in the area of early succession, where the glacier receded 10 years ago, when compared to other areas. For instance, the relative abundance of Bacteroidia was lower in the rhizosphere of P. alpinum collected in the region where the glacier receded 70 and 150 years ago (6.6% and 6.1%, respectively, Fig. ), compared to the region where the glacier receded 10 years ago (11.3%). The same pattern was observed for the rhizosphere bacterial community of S. atratum (Fig. ). We performed differential abundance analysis at the ASV level using LefSe and identified 212 bacterial ASVs that were differentially abundant between the sites where the glacier receded 10, 70, and 150 years ago. Of these ASVs, the relative abundance of 164 ASVs decreased in glacier 70 and glacier 150 samples in comparison to glacier 10 samples. The majority of these ASVs ( n = 112) were undetectable in the rhizosphere of all plant species that grew at the glacier 70 and glacier 150 locations (Fig. and C). Most of these ASVs belonged to the bacterial orders Betaproteobacteriales ( n = 22), Chitinophagales ( n = 19), Rhizobiales ( n = 10), Gemmatales ( n = 8), and Blastocatellales ( n = 7) (Fig. ). The ASVs enriched in the rhizosphere of all plant species that grew at the glacier 10 were also found in bulk soil collected from the glacier 10 location (Fig. ), indicating that the surrounding soil was the main reservoir of bacteria that colonized the rhizosphere of all plant species that grew at the glacier 10 . Interestingly, a total of 67 ASVs that were enriched in the sites where the glacier receded 10 years ago had a high similarity (100% identity and 100% sequence coverage) with ASVs found in various cryospheric ecosystems (Fig. ). 
In contrast, only 24 ASVs were enriched in glacier 70 as well as glacier 150 samples, respectively. These ASVs belonged to Betaproteobacteriales ( n = 9), Solirubrobacterales ( n = 4), Pseudonocardiales ( n = 2), Chitinophagales ( n = 2), and Xanthomonadales ( n = 2). These results indicate that ASVs that were enriched in the sites where the glacier receded 10 years ago likely originated from the glacier.
Shotgun metagenome analysis of bulk soil and rhizosphere samples collected from plots in the area where the glacier receded 10 years ago allowed us to identify taxa and functions that were enriched in the plant rhizosphere. A gene-centric approach identified a total of 6321 KOs with a maximum relative abundance of 0.46% and a median relative abundance of 0.003% of total mapped reads. We identified genes that might be important for bacteria to survive during early succession. For instance, genes related to manganese and iron transport systems were consistently detected in the metagenome samples (average relative abundance 0.09% of total mapped reads). Gene clusters encoding the branched-chain amino acid transporters ( livFGHKM ), which are responsible for the transport of extracellular branched-chain amino acids, were detected in high abundance (relative abundance 1.15%). Genes that are associated with chemolithotrophic pathways, i.e. sulfite dehydrogenase and Ni/Fe-hydrogenase, were detected (relative abundance 0.04%). A gene encoding a nitrogen fixation protein ( nifU ) was also recovered from all samples (relative abundance 0.02%). Microbial potential for solubilization and utilization of inorganic phosphate was indicated by the occurrence of genes encoding alkaline phosphatases ( phoA, phoB , and phoD ) and inorganic pyrophosphatase ( ppa ). Moreover, we detected genes involved in the production of cold shock proteins (relative abundance 0.89%) and chitinase (relative abundance 0.03%). We further constructed MAGs to compare functional potentials across phylogenetic lineages (Fig. ). The shotgun metagenomic data yielded a total of 54 bacterial MAGs with a completeness above 50% and contamination levels below 10% ( ). Among them, six MAGs were considered to represent high-quality genomes (completeness >90% and contamination levels <5%).
Most of the MAGs were assigned to Burkholderiales, Pseudomonadales, Sphingomonadales ( Proteobacteria ), Solirubrobacterales, Actinomycetales , and Mycobacteriales ( Actinobacteriota ). MAGs that were assigned to the bacterial orders Pseudomonadales, Steroidobacterales, Actinomycetales, Mycobacteriales , and SG8-23 carried the nifU gene that encodes a nitrogen fixation protein. More than half of MAGs carried phoD , encoding an alkaline phosphatase. This gene could also be detected within different bacterial orders such as Burkholderiales, Pseudomonadales, Sphingomonadales, Actinomycetales , and Mycobacteriales . Several genes that encode proteins related to nutrient uptake, i.e. multiple sugar transport system ( ABC.MS.P ), urea transport system ( urtC ), and iron complex transport system ( ABC.FEV.P ) were found. Among the MAGs, we detected high occurrences of genes encoding a cold shock protein ( cspA ), an exopolysaccharide production protein ( exoQ ), and a spermidine/putrescine transport system ( ABC.SP.P ) that are likely related to the adaptability and stress response of bacteria in the Hallstätter glacier.
Similar bacterial functions were detected in the rhizosphere of different plants and soil samples during early succession
We detected a highly similar gene profile between rhizosphere samples of different plant species. Adonis analysis indicated no difference in bacterial gene functional profiles between different plant species that grew where the glacier receded 10 years ago (Adonis— P = 0.298). When the bulk soil samples were included, the gene profiles of the rhizosphere samples were significantly different in comparison to them (Adonis— P = 0.028, R 2 = 43%). However, pairwise analysis only indicated a certain tendency for the presence of different gene profiles between bulk soil and the rhizospheres of P. alpinum and H. alpina ( P = 0.100), while no difference was observed when compared to S. atratum samples ( P = 0.200). Overall, pairwise comparisons suggested only minor differences in gene profiles, and thus in microbial functioning, between the rhizospheres of different plants and the bulk soil samples during the early succession represented by glacier 10 samples. Of the 6321 detected KOs, a closer look at the differentially abundant functions identified only a small number of KOs that were enriched ( n KO ) in the rhizosphere samples of H. alpina ( n KO = 23), S. atratum ( n KO = 26), and P. alpinum ( n KO = 1) when compared to bulk soil samples (LefSe— P < 0.05, LDA score > 2). Therein, we detected genes that encode glutamate synthase ( GLU ), branched-chain amino acid transport systems ( livK and livM ), serine/threonine-protein kinase ( prkC ), malate dehydrogenase ( maeB ), and heavy metal efflux transporter ( czcA ).
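The LefSe screen applied above (Kruskal–Wallis P < 0.05, LDA score > 2) can be illustrated with a simplified sketch. This is not the authors' pipeline: real LEfSe computes a linear discriminant analysis effect size, which is approximated here by a crude fold-change criterion, and all function and group names are hypothetical.

```python
# Simplified LEfSe-style screen: keep features (e.g. KEGG KOs) that differ
# between two sample groups by a Kruskal-Wallis test plus a minimum
# fold-change in group means (a crude stand-in for the LDA score).
import numpy as np
from scipy.stats import kruskal

def lefse_like_screen(abund, groups, g1, g2, alpha=0.05, min_fold=2.0):
    """abund: samples x features matrix of relative abundances;
    groups: per-sample labels. Returns indices of features that differ
    between groups g1 and g2 (P < alpha, >= min_fold mean difference)."""
    groups = np.asarray(groups)
    hits = []
    for j in range(abund.shape[1]):
        a = abund[groups == g1, j]
        b = abund[groups == g2, j]
        p = kruskal(a, b).pvalue
        # fold-change of group means (pseudocount avoids division by zero)
        fold = (a.mean() + 1e-12) / (b.mean() + 1e-12)
        if p < alpha and max(fold, 1.0 / fold) >= min_fold:
            hits.append(j)
    return hits
```

In practice a multiple-testing correction would also be applied across features; it is omitted here for brevity.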
Shifts in rhizosphere bacterial richness and community structure during primary colonization of the Hallstätter glacier forefield
Using the amplicon sequencing dataset, we explored microbial succession in the rhizosphere of the three plant species, H. alpina, P. alpinum , and S. atratum along the deglaciation chronosequence. Amplicon sequencing yielded results congruent with the shotgun metagenome data. Bulk soil samples clustered together with rhizosphere samples from the glacier front (glacier 10 , ), indicating high similarity between these samples. To investigate the impact of deglaciation and host plants on bacterial richness and community structure, soil samples were excluded from the analysis. Deglaciation affected bacterial richness (Kruskal–Wallis test— P = 0.021, Fig. ) but not bacterial diversity in the rhizosphere ( P = 0.120, Fig. ). A higher bacterial richness ( n ASV = 875) was found for glacier 10 in comparison to the other regions (glacier 70 – n ASV = 556; glacier 150 – n ASV = 544). In contrast, a significant difference in bacterial richness and diversity was not observed when plant species were used as factors (Kruskal–Wallis test— P = 0.701 and P = 0.697, Fig. and D). Shifts in the bacterial community composition in the rhizosphere as a response to deglaciation were observed. The constructed PCoA plot indicated that samples clustered according to the deglaciation period. All samples that were obtained from bulk soil near the glacier front (glacier 10 ) showed a tendency to cluster together (Fig. ). Adonis analysis indicated that deglaciation contributed significantly to bacterial community variation ( P = 0.001, R 2 = 28%). Furthermore, according to the PCoA plot, the rhizosphere microbiomes of P. alpinum from glacier 70 and glacier 150 samples clustered together. A similar pattern was observed for the rhizosphere microbiome of S. atratum , whereas the rhizosphere microbiomes of H. alpina obtained from the glacier 70 and glacier 150 locations ordinated away from each other.
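The Adonis (PERMANOVA) tests used here partition the variation in a between-sample distance matrix by a grouping factor such as deglaciation period. As an illustration of the underlying idea only (the published analysis was presumably run with standard tools such as vegan's adonis in R or scikit-bio), a minimal permutation implementation:

```python
# Minimal PERMANOVA (Anderson-style): pseudo-F from within- vs total
# sums of squared distances, with a label-permutation p-value.
import numpy as np

def permanova(dist, labels, n_perm=999, seed=0):
    """dist: square distance matrix; labels: group label per sample.
    Returns (pseudo-F, permutation p-value, R^2)."""
    labels = np.asarray(labels)
    n = len(labels)
    d2 = dist ** 2
    sst = d2[np.triu_indices(n, 1)].sum() / n  # total sum of squares

    def ssw(lab):
        # within-group sum of squares, summed over groups
        s = 0.0
        for g in np.unique(lab):
            idx = np.where(lab == g)[0]
            sub = d2[np.ix_(idx, idx)]
            s += sub[np.triu_indices(len(idx), 1)].sum() / len(idx)
        return s

    a = len(np.unique(labels))
    sw = ssw(labels)
    f = ((sst - sw) / (a - 1)) / (sw / (n - a))
    rng = np.random.default_rng(seed)
    exceed = 0
    for _ in range(n_perm):
        swp = ssw(rng.permutation(labels))
        fp = ((sst - swp) / (a - 1)) / (swp / (n - a))
        exceed += fp >= f
    p = (exceed + 1) / (n_perm + 1)
    r2 = 1 - sw / sst
    return f, p, r2
```

With only a handful of samples per group, the smallest attainable permutation p-value is limited by the number of distinct label arrangements, which is why studies of this kind report p-values such as 0.001 only with adequate replication.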
Plant species also contributed significantly to the bacterial community variations but to a lesser degree (Adonis— P = 0.001, R 2 = 14%). When analysed separately for each deglaciation region, we did not observe a significant difference in rhizosphere bacterial community composition between the different plant species that were obtained from glacier 10 (Adonis— P = 0.366, R 2 = 26%). This result reflects the nonsignificant differences in bacterial functional profiles in the rhizosphere of different plant species that grew within the glacier 10 site, as described previously. However, significant differences in bacterial community composition between plant species were observed in the samples obtained from glacier 70 (Adonis— P = 0.007, R 2 = 60%) and glacier 150 (Adonis— P = 0.005, R 2 = 61%). When calculating bacterial community dissimilarity between different plant species, we observed a higher bacterial community dissimilarity between different plant species at the later stages of the succession rather than the early stage. Additionally, Spearman’s correlation analysis indicated that bacterial community dissimilarity between different plant species was positively correlated to successional age ( P = 0.009, r = 0.35; ). Moreover, our results showed that bacterial community assembly was more stochastic at glacier 10 (NST = 64%) compared to glacier 70 (NST = 51%—Wilcoxon test P < 0.001) and glacier 150 (NST = 47%—Wilcoxon test P < 0.001) ( ). Taken together, these results suggest that bacterial communities were potentially selected by the plant species at later stages of the succession (i.e. glacier 70 and glacier 150 ). Gammaproteobacteria, Alphaproteobacteria, Bacteroidia , and Actinobacteria were identified as the most abundant bacterial classes, which contributed to 19.6%, 15.0%, 9.0%, and 6.6% of the total relative abundance, respectively (Fig. ). We did not observe gradual changes in the relative abundance of the two dominant bacterial classes, i.e. 
Gammaproteobacteria and Alphaproteobacteria for different deglaciation periods. Gammaproteobacteria were the dominant bacterial class in H. alpina (22.7% and 26.5%) and S. atratum (20.3% and 23.7%) in glacier 10 and glacier 150 samples, respectively (Fig. ). Interestingly, the relative abundance of Actinobacteria (4.0%—glacier 10 , 11.6%—glacier 70 , 3.2%—glacier 150 ) in the rhizosphere of H. alpina and the relative abundance of Blastocatellia (3.9%—glacier 10 , 9.5%—glacier 70 , 3.9%—glacier 150 ) in the rhizosphere of S. atratum showed an opposite pattern. The relative abundance of Actinobacteria was relatively low in the bulk soil obtained from glacier 10. Relative abundances of Alphaproteobacteria were relatively stable for the different deglaciation periods ( H. alpina– 14.1%–19.7%, P. alpinum –12.2%–15.1%, and S. atratum –16.3%–17.5%). Relative abundances of Bacteroidia were higher in the rhizosphere of all plant species as well as bulk soil samples that were collected in the area of early succession, where the glacier receded 10 years ago, when compared to other areas. For instance, the relative abundance of Bacteroidia was lower in the rhizosphere of P. alpinum collected in the region where the glacier receded 70 and 150 years ago (6.6% and 6.1%, respectively, Fig. ), compared to the region where the glacier receded 10 years ago (11.3%). The same pattern was observed for the rhizosphere bacterial community of S. atratum (Fig. ). We performed differential abundance analysis at the ASV level using LefSe and identified 212 bacterial ASVs that were differentially abundant between the sites where the glacier receded 10, 70, and 150 years ago. Of these ASVs, the relative abundance of 164 ASVs decreased in glacier 70 and glacier 150 samples in comparison to glacier 10 samples. The majority of these ASVs ( n = 112) were undetectable in the rhizosphere of all plant species that grew at the glacier 70 and glacier 150 locations (Fig. and C). 
Most of these ASVs belonged to the bacterial orders Betaproteobacteriales ( n = 22), Chitinophagales ( n = 19), Rhizobiales ( n = 10), Gemmatales ( n = 8), and Blastocatellales ( n = 7) (Fig. ). The ASVs enriched in the rhizosphere of all plant species that grew at the glacier 10 were also found in bulk soil collected from the glacier 10 location (Fig. ), indicating that the surrounding soil was the main reservoir of bacteria that colonized the rhizosphere of all plant species that grew at the glacier 10 . Interestingly, a total of 67 ASVs that were enriched in the sites where the glacier receded 10 years ago had a high similarity (100% identity and 100% sequence coverage) with ASVs found in various cryospheric ecosystems (Fig. ). In contrast, only 24 ASVs were enriched in glacier 70 as well as glacier 150 samples, respectively. These ASVs belonged to Betaproteobacteriales ( n = 9), Solirubrobacterales ( n = 4), Pseudonocardiales ( n = 2), Chitinophagales ( n = 2), and Xanthomonadales ( n = 2). These results indicate that ASVs that were enriched in the sites where the glacier receded 10 years ago likely originated from the glacier.
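The presence/absence reasoning behind statements such as "112 ASVs were undetectable at the glacier 70 and glacier 150 locations" can be sketched as follows; variable and site names are illustrative, not the authors' code.

```python
# Find ASVs detected at the focal (youngest) site but undetectable
# everywhere else, from a samples x ASVs count matrix.
import numpy as np

def site_specific_asvs(counts, site_labels, focal="glacier_10"):
    """counts: samples x ASVs count matrix; site_labels: site per sample.
    Returns indices of ASVs present in the focal site but in no other."""
    site_labels = np.asarray(site_labels)
    present_focal = counts[site_labels == focal].sum(axis=0) > 0
    present_elsewhere = counts[site_labels != focal].sum(axis=0) > 0
    return np.where(present_focal & ~present_elsewhere)[0]
```

The same mask logic, applied in the other direction, yields the ASVs enriched only at the older sites.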
Our study on plant-associated bacterial communities during early succession in the forefield of the Hallstätter glacier provides novel insights into the temporal dimension of the assembly of the plant rhizosphere microbiota. We found that bacterial genes encode potential functional adaptations to the glacier environment. The rhizosphere microbiomes of the alpine plants H. alpina, P. alpinum , and S. atratum showed clear differences along the chronosequence. These differences were characterized by decreasing microbial richness but increasing specificity of plant-associated bacterial communities in the rhizosphere. Altogether, the findings indicate that time plays a significant role in the assembly of the rhizosphere bacterial communities across the chronosequence. These bacteria potentially support pioneer plants in the process of colonizing new habitats and their long-term establishment at later stages. When microbial functions were analysed at the glacier 10 site, our data indicated key features related to bacterial adaptation to the glacier forefield. Betaproteobacteria, Gammaproteobacteria, Alphaproteobacteria , and Actinobacteria dominated the forefield of the glacier. Burkholderiales ( Betaproteobacteria ), Sphingomonadales ( Alphaproteobacteria ), Micrococcales , and Mycobacteriales ( Actinobacteria ) were previously found in other regions with low mean annual temperatures, e.g. the Damma glacier (Lapanje et al. ), the glacial region of Sikkim Himalaya (Mukhia et al. ), and the Svalbard glacier (Perini et al. , Tian et al. ). The low temperatures and frequent temperature fluctuations around the freezing point, which can cause cold shock responses in microbial cells, are common in harsh alpine habitats.
By coupling short-read-based and genome-centric analyses, we provided evidence that certain bacterial taxa in the deglaciated area are functionally adapted to cold temperatures and limited nutrients, as indicated by the occurrence of genes encoding cold shock proteins and nutrient-uptake systems. The presence of genes encoding a particular cold shock protein, i.e. cspA , is crucial to maintain protein homeostasis during cold stress (Xia et al. , Kumar et al. ). Moreover, genes that are involved in the production of exopolysaccharides ( exo genes) and the spermidine/putrescine transport system are important to protect bacteria from abiotic stresses, i.e. drought stress and cold stress (Naseem et al. , Morcillo and Manzanera ), and play important roles in root colonization (Liu et al. ). Microbial exopolysaccharides also cause soil particle aggregation, which is important for soil structure formation and the accumulation of nutrients (Costa et al. ). During early succession, and especially in cold regions, soil is dominated by mineral phosphate, which is highly insoluble and not available for plants (Heindel et al. , Ren et al. ). Hence, genes encoding alkaline phosphatases ( phoA, phoB , and phoD ) are likely needed for bacteria to increase phosphate availability under phosphorus-limited conditions. Furthermore, the occurrence of genes encoding multiple sugar, iron, and branched-chain amino acid transport systems may provide a benefit to scavenge and access resources from the surrounding environment. The succession stage in the Hallstätter glacier forefield had a substantial impact on the microbial community in the rhizosphere of pioneer plants. A recent study by Mapelli et al. ( ) examined changes in bacterial diversity in the rhizosphere of a pioneer plant along a High Arctic glacier chronosequence.
The authors observed that changes of total nitrogen, total organic carbon, and cation exchange capacity during the developmental stage of the soil strongly affect the bacterial community in the rhizosphere throughout the chronosequence. Interestingly, the bacterial community functions and structure in the rhizosphere did not differ significantly between different plants at the glacier 10 site. During the initial stages of succession, the abiotic factors present in the studied Hallstätter glacier pose challenging conditions for microbial survival. The identified microorganisms exhibit common characteristics that enable them to adapt and endure the adverse environmental conditions. Based on our findings, it can be inferred that during early succession, i.e. the glacier 10 region, different plant species recruit similar microbes from the surrounding soil, which are ubiquitous and well adapted to this particular environment. Moreover, the specific environment in the glacier 10 region, especially due to reduced frost-free days, could also limit the ability of the host plant to shape the bacterial community structure and functioning in the rhizosphere. In this study, from the site closest to the glacier to the older sites, the rhizosphere microbial community showed an increase in host specificity, but a decrease in rhizosphere microbial richness. Interestingly, we observed that several bacterial ASVs present at the glacier 10 were undetectable at the glacier 70 and glacier 150 locations. These taxa may have partially originated from the glacier, as indicated by the high similarity with bacterial sequences from various cryospheric ecosystems, and might therefore be sensitive to changing habitat conditions that occurred at the glacier 70 and glacier 150 locations. Moreover, we argue that host plants only maintain certain taxa that provide ecological services and functional traits necessary for promoting fitness and resilience of the host at later stages of succession. 
It is widely acknowledged that the plant host is one of the main drivers of rhizosphere microbiome assembly (Berg and Smalla , Hassani et al. ). The selection of the rhizosphere community by host plants is based on functional features related to plant metabolism (Mendes et al. ). After the early succession stage, with nutrients becoming more available and more days without frost, the production of root exudates that select specific rhizosphere bacteria is likely more pronounced. Increased specificity during the assembly of rhizosphere microbial communities is also indicated by the observed decrease in stochasticity, a pattern previously described for plant communities. Our findings are in line with a recent study by Hanusch et al. ( ) that suggested environmental filtration and biotic interactions replace stochasticity after 60 years of succession in a glacier forefield. Despite conducting a thorough analysis, this study has certain limitations. These include a relatively limited number of biological replicates and a restricted number of sampling plots where the studied plant species were naturally growing. Moreover, soil chemistry data such as soil pH, water content, and organic matter content during soil development, which could potentially impact bacterial community structures, were not considered in this study. These limitations emphasize the importance of conducting future research with a larger sample size and including these relevant factors in order to validate and confirm the impact of deglaciation on bacterial community structures. In conclusion, we revealed that the Hallstätter glacier is a source of specific, cold-adapted bacterial communities, which are likely diminished during deglaciation. While plant-specific microorganisms facilitate long-term establishment, well-adapted ubiquitous bacteria from the surrounding soil may allow pioneer plants to colonize new habitats.
This pattern was reflected by a decrease in bacterial richness but an increase in the specificity of the plant-associated bacterial community in the rhizosphere along the deglaciation gradient.
fiae005_Supplemental_File
Molecular characterization of plant growth-promoting
The agricultural systems of Saudi Arabia have significantly improved during the last 10 years. Despite the common perception that Saudi Arabia is a desert, there are several areas where cultivation is possible. The Kingdom of Saudi Arabia is moving forward with plans (Vision 2030) to develop the agricultural sector because of its direct impact on food security. By doing so, the Kingdom attaches importance to issues of food and water security, agricultural development, and environmental sustainability. According to the Food and Agriculture Organization (2009), the global population will reach 9.7 billion by 2050, and difficulties in meeting human food needs are expected due to the effects of climate change, a shrinking agricultural land area, and the degradation of the environment and natural resources, including the loss of numerous biodiversity components that are crucial to achieving sustainable agricultural production . Research on fungal biodiversity within the genus Trichoderma and characterization of plant growth-promoting fungi is lacking in this region. Salinity and temperature are important factors in the agriculture of Saudi Arabia, and plant growth-promoting fungi such as Trichoderma spp., which can tolerate high temperatures and salinities, are advantageous to agricultural production in this region. Considering the potential for Trichoderma species to increase plant growth and control phytopathogenic fungi, the present characterization of Trichoderma biodiversity was undertaken. Trichoderma is a globally dispersed, ubiquitous genus in the family Hypocreaceae, and Trichoderma fungi may be found in various soil types and root ecosystems, particularly those rich in organic materials. Trichoderma fungi reproduce asexually by producing conidia and chlamydospores and by producing ascospores in their natural habitats.
Some of the most beneficial effects of Trichoderma spp. on plants include the control of minor infections, the delivery of dissolved nutrients, increased nutrient intake, increased glucose metabolism and photosynthesis, phytohormone production, bioremediation of heavy metals and environmental contaminants, and use in xenobiotic bioremediation – . Trichoderma spp. can promote host plant resistance to a variety of biotic and abiotic stresses; they improve plant resistance to environmental challenges, including salt and drought, by stimulating plant growth, reprogramming gene expression in roots and shoots, maintaining nutritional uptake, and activating protective mechanisms to prevent oxidative damage – . Applying Trichoderma spp. to seeds, seedlings, and pathogen-free soils has been shown to stimulate plant growth . Symbioses occur between crops and soil microorganisms, such as plant growth-promoting rhizobacteria (PGPR) and plant growth-promoting fungi (PGPF), which are both natural biostimulants. Trichoderma spp. that are PGPF have been used commercially to suppress phytopathogens such as Fusarium oxysporum , Rhizoctonia solani , Armillaria mellea , and Chondrostereum purpureum . However, the antagonistic capability and biostimulant action of Trichoderma vary greatly, resulting in strains with predominant biostimulant action and others with predominant antagonistic action . As a result, some Trichoderma strains are better suited for biological control as biopesticides, while others are better suited for boosting crop growth and nutrient uptake as biostimulants . Trichoderma species are effective mycoparasites that produce numerous secondary compounds, many of which have clinical significance . Additionally, they have the ability to detect, penetrate, and destroy other fungi and certain nematodes, which contributes to their commercial success as biopesticides (more than 60% of all registered biopesticides contain Trichoderma ).
Trichoderma species are distinguished by their rapid growth, ability to assimilate a wide range of substrates, and ability to produce a variety of antimicrobial agents. Trichoderma species synthesize siderophores as secondary metabolites with antibacterial properties that inhibit the growth of soil pathogens by scavenging iron and inactivating iron-dependent enzymes; thus, the fungi reduce plant disease and enhance plant growth , . Several Trichoderma species also have the ability to synthesize phytohormones and phytoregulators, including indole-3-acetic acid (IAA), which is important for plant growth and development , . Moreover, Trichoderma increase the bioavailability of phosphorus by breaking down insoluble phosphate in the soil via phytases, which facilitate and enhance the uptake of nutrients by plants. Previous studies have documented the strain-dependent growth-promoting effect of Trichoderma spp. on various plants, as well as the ability of different Trichoderma spp. to provide protection against plant diseases , . Attempts to understand the diversity and geographical distribution of Trichoderma/Hypocrea have resulted in global observations of the genus , but unfortunately, there have been few studies of Trichoderma in Saudi Arabia , – . The goal of this study was to isolate and characterize Trichoderma strains from different regions of Saudi Arabia and to evaluate their growth-promoting effects on plants.
Sample collection and Trichoderma isolation
Soil samples were collected at six sites each in the Abha and Riyadh regions of Saudi Arabia, for a total of 72 soil samples across the two regions. Nineteen Trichoderma strains were obtained from these soil samples. The soil dilution plate method was used for the isolation of fungi , . The morphological and colony characteristics of the Trichoderma isolates were studied on potato dextrose agar (PDA; HIMEDIA, India) medium and Trichoderma selective medium (TSM), following previous studies , . The macrocharacteristics (colony radius, pigments, green conidia, odor, and colony appearance) and microcharacteristics (phialide, conidium, and presence of chlamydospores) were observed.
Determination of physical and chemical soil properties
A pH meter was employed to measure the soil pH. For pH measurement, a soil suspension with a ratio of 1:2.5 (soil to water) was prepared and shaken for an hour . Additionally, an electric conductivity meter was used to assess the electrical conductivity (EC) of the soil’s saturated paste extract. Soil moisture content (MC) was determined by oven drying at 103 °C for 12 h and calculated as follows:

$$\mathrm{MC}\ (\%) = \frac{W_f - W_d}{W_d} \times 100$$

where MC is the moisture content in percent, W_f is the weight of the fresh soil, and W_d is the weight of the soil after oven drying. To estimate organic matter (OM), the loss-on-ignition method was used . The percent organic matter in soil was calculated as follows:
$${}\; ( \% ) = ( {{}2 - {}3} )/( {{}2 - {}1} ) 100$$ where OM is percent organic matter in soil, W1 = Weight of the crucible, W2 = Weight of the crucible + oven dry sample, W3 = Weight of the crucible + oven dry sample after ignition. To determine whether the Trichoderma strains selected for the studies were nonpathogenic, a plant pathogenicity test was conducted. Twenty healthy two-week-old tomato plants were inoculated with a suspension of Trichoderma strain (1 × 10 7 CFU/ml), and the roots were inoculated via the root-dip method. Inoculated plants were transplanted singly into steam-sterilized peat moss soil and sand mixed at a 5:1 ratio. Seedlings inoculated with sterilized water (1 ml per plant) served as controls. After one week, the plants were observed for any visible symptoms. The pathogenicity test was conducted twice. Molecular identification of and phylogenetic analysis of isolated Trichoderma species DNA extraction, PCR amplification, and molecular identification of species of Trichoderma were performed . Phylogenetic analysis For phylogenetic characterization of the isolated Trichoderma strains, the relevant downloaded sequences (Table ) were aligned using Clustal W-pairwise sequence alignment of the EMBL nucleotide sequence database. The sequence alignments were trimmed and verified by the MUSCLE (UPGMA) algorithm using MEGA11 software, Auckland, New Zealand A phylogenetic tree was reconstructed, and the evolutionary history was inferred using the neighbor‒joining method. The robustness of the internal branches was assessed with 500 bootstrap replications. Evolutionary distances were computed using the maximum composite likelihood method and were calculated in units of the number of base substitutions per site. Biochemical characterization of plant growth-promoting Trichoderma spp. The isolated Trichoderma strains were assessed for phosphate solubilization and IAA, ammonia, and siderophore production. 
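As a sanity check, the two soil formulas above can be evaluated directly. The sketch below mirrors the dry-basis moisture and loss-on-ignition calculations; the weights are illustrative placeholders, not measurements from this study.

```python
def moisture_content(wf, wd):
    """Gravimetric moisture content (%) on a dry-weight basis.
    wf: weight of fresh soil; wd: weight after oven drying."""
    return (wf - wd) / wd * 100

def organic_matter(w1, w2, w3):
    """Organic matter (%) by loss on ignition.
    w1: crucible; w2: crucible + oven-dry sample;
    w3: crucible + oven-dry sample after ignition."""
    return (w2 - w3) / (w2 - w1) * 100

# Illustrative weights in grams (not data from this study)
print(round(moisture_content(12.0, 10.0), 2))       # 20.0 %
print(round(organic_matter(20.0, 30.0, 29.5), 2))   # 5.0 %
```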
Phosphate solubilization activity The ability of the 8 Trichoderma isolates to solubilize and mineralize phosphate (P) in vitro was evaluated, and qualitative screening of phosphate solubilization was performed on Pikovskaya agar medium (HIMEDIA, India) . Indole-3-acetic acid (IAA) production For the quantitative estimation of IAA, DF salts minimal media supplemented with L-tryptophan at a concentration of 1.02 g/l were prepared (HIMEDIA, India) , . Ammonia production Freshly grown Trichoderma isolates were cultured in peptone water broth (HIMEDIA, India) in test tubes at 28 °C for 2 days . Siderophore production Modified chrome azurol S (CAS) agar (HIMEDIA, India) with King's media (Kings Media, Kochi, Kerala) (pH 6.8) was used . In vivo evaluation of the effect of Trichoderma isolates on tomato plant growth The plant growth-promoting activity of the Trichoderma isolates was assessed by analyzing the seed germination and seedling growth of tomato plants . Effect of Trichoderma isolates on seed germination Tomato seeds (Roma VF) were purchased from a local market (Salam Street) in Riyadh, Saudi Arabia. All methods were performed in accordance with the relevant guidelines/regulations/legislation. Fifty tomato seeds of uniform size were surface sterilized with 0.5% NaClO (Xilong Scientific Co., Ltd., China) for 5 min and washed five times with sterile water. Ten seeds were transferred to Petri plates covered with a layer of cotton and filter paper. A spore suspension of each Trichoderma isolate (10⁵ spores/ml) was poured over the seeds (100 µl/seed), and the plates were incubated for 7 d at 28 °C under 12 h/12 h light/dark conditions. Seeds treated with an equivalent volume of sterile water served as controls. At the end of two weeks, the germination rate was calculated as
$$\mathrm{Germination}\;(\%)=\frac{G_{s}}{T_{s}}\times 100$$
where Gs is the number of seeds germinated and Ts is the total number of seeds . Effect of Trichoderma isolates on tomato plant growth Tomato seeds were surface sterilized, treated with Trichoderma, and sown in pots containing 5 g of sterile soil. The pots were watered as required with sterile water. After 2 weeks, the seedlings were treated (0.5 ml) with Trichoderma strains (10⁵ spores/ml), allowed to grow for two weeks, treated again with Trichoderma strains (10⁵ spores/ml), and grown for an additional 2 weeks. Six weeks after germination, the experiment was terminated, and the plants were uprooted. The roots were washed carefully under running water, and the plant shoot height, root length, and wet weight were measured. The plants were then dried at 105 °C for 30 min and subsequently at 50 °C for 24 h, and plant dry weight was recorded , .
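The germination-rate formula above can be scripted directly; the seed counts in this minimal sketch are hypothetical.

```python
def germination_rate(germinated_seeds, total_seeds):
    """Germination (%) = (Gs / Ts) x 100."""
    if total_seeds <= 0:
        raise ValueError("total_seeds must be positive")
    return germinated_seeds / total_seeds * 100

# e.g. 45 of 50 seeds germinated (hypothetical counts)
print(germination_rate(45, 50))  # 90.0
```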
After soil samples were collected from six locations each in the Abha and Riyadh regions, PDA medium and TSM were used to isolate Trichoderma species. Twenty Trichoderma strains were isolated from soil samples collected in the Abha region, and 12 were isolated from soil samples collected in the Riyadh region. The isolates were then purified, identified microscopically, and classified into six groups based on colony and morphological characteristics. Sixteen isolates were initially shortlisted, of which eight of the 34 isolates were subjected to pathogenicity testing on tomato plants and selected for further investigation. Soil from the Abha region had a higher diversity of Trichoderma species than soil from the Riyadh region: among these 8 strains, six species (T. koningiopsis, T. lixii, T. koningii, T. harzianum, T. brevicompactum, and T. velutinum) were isolated from Abha, and two species (T. lixii and T. harzianum) were isolated from Riyadh (Figs. , , , , , , and ). The isolated Trichoderma species (T1–T8) were cultured at 28 °C for 7 days; in the figures, the panels show (A) the anterior side of the PDA plate, (B) the superior side of the PDA plate, (C) chlamydospores, and (D–I) conidiophores and phialides, with (D) conidiation pustules on Pikovskaya agar after 4 days and (E, F) conidia. The fungal populations in the two regions reached maximum CFU/g values of 45.3 × 10² and 83 × 10² in the Abha region and 14 × 10² and 47 × 10² in the Riyadh region. Moreover, soil from the Abha region exhibited greater overall fungal diversity than soil from the Riyadh region. Molecular identification and phylogenetic analysis of the isolated Trichoderma species The identities of the Trichoderma isolates were confirmed by molecular analysis.
The internal transcribed spacer (ITS) region of fungal 18S rDNA was amplified using primers ITS4 and ITS. Searches with the Basic Local Alignment Search Tool (BLAST) against NCBI GenBank were performed, and the results are presented in Table . Phylogenetic analysis A phylogenetic tree was constructed from nineteen sequences: the sequences of the eight isolated Trichoderma species, 10 Trichoderma species from GenBank, and a Fusarium oxysporum (MT151384) sequence as an outgroup (Fig. ). MEGA11 was used for the evolutionary analysis; the evolutionary history was inferred using the neighbor-joining method, and evolutionary distances were computed using the maximum composite likelihood method. There were a total of 464 positions in the final dataset. We observed that T. harzianum, T. velutinum, and T. lixii were closely related and belonged to the Harzianum clade (Clade 1), while T. brevicompactum belonged to the Brevicompactum clade (Clade 2), and T. koningiopsis and T. koningii belonged to the Viride clade (Clade 3). Characterization of plant growth-promoting Trichoderma spp. Biochemical analyses were performed to characterize the plant growth-promoting activity of the Trichoderma isolates, which were assessed for phosphate solubilization and for IAA, ammonia, and siderophore production; the results are presented in Table . The phosphate solubilization efficacy of the Trichoderma isolates was evaluated on Pikovskaya agar by acidification, and all eight isolates utilized the trisodium phosphate in the medium and showed positive results (Fig. a,b). The ability to produce ammonia differed by isolate (Table ). The highest production was exhibited by T4 (T. harzianum), T5 (T. lixii), and T7 (T. harzianum), while the other isolates, T1 (T. koningiopsis), T2 (T. lixii), T3 (T. koningii), T6 (T. brevicompactum), and T8 (T. velutinum), displayed moderate production (Table ; Fig. c). Qualitative and quantitative analyses were conducted to determine IAA production by the eight Trichoderma isolates in culture media supplemented with tryptophan as a precursor. The amount of IAA produced by the different isolates was quantified by interpolating spectrophotometer readings on standard curves. IAA production differed by isolate (Table ): large amounts were produced by T. brevicompactum (51.24 ± 0.18 µg/ml) and T. lixii (50.82 ± 0.65 µg/ml), whereas T. koningiopsis (0.15 ± 0.052 µg/ml) was the lowest producer in media supplemented with tryptophan (Fig. d). Siderophore production also differed by isolate (Table ) and was demonstrated by the formation of orange halos around the colonies on blue modified CAS agar plates (Fig. e,f). Isolates T3 and T4 (T. koningii and T. harzianum, respectively) showed the greatest zone formation, observed after 5 d. Approximately 25% of the isolates were high siderophore producers, and the remaining isolates were moderate producers; no zone formation was observed for isolates T5 (T. lixii) and T8 (T. velutinum). In vivo evaluation of the effect of plant growth-promoting Trichoderma on tomato plant growth Effect of Trichoderma isolates on seed germination Tomato seeds treated with Trichoderma isolates were observed for one week for seed germination. Compared with the control, priming with Trichoderma isolates significantly increased seed germination (P ≤ 0.05), except for the T3 isolate. Seed germination was 100% in seeds treated with the T4 and T6 isolates, while seeds treated with the T1 and T5 isolates showed 91.1% and 90.9% germination, respectively. The germination rates for the T8, T2, and T7 isolates were 84.1%, 82.2%, and 72.7%, respectively. Seed germination after treatment with the T3 isolate was statistically equivalent to that of the control (P ≤ 0.05) (Fig. ).
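The standard-curve interpolation used above to quantify IAA can be sketched as follows. The curve values here are hypothetical, and a linear calibration is assumed (the paper does not state the curve model).

```python
import numpy as np

# Hypothetical IAA standard curve (known concentrations in µg/ml vs.
# absorbance readings) -- illustrative values, not the study's measurements.
std_conc = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
std_abs = np.array([0.02, 0.11, 0.20, 0.29, 0.38, 0.47])

# Fit a linear standard curve: absorbance = slope * concentration + intercept.
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def iaa_concentration(absorbance):
    """Interpolate a sample's IAA concentration (µg/ml) from the curve."""
    return (absorbance - intercept) / slope

print(round(iaa_concentration(0.25), 1))  # ~25.6 µg/ml
```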
Effects of Trichoderma isolates on tomato plant growth Overall, the Trichoderma isolates significantly (P ≤ 0.05) increased tomato plant growth compared to that of untreated control plants (Fig. ). Among the treated plants, the greatest increase in shoot height was observed for those treated with T5-T. lixii (16.16 cm), followed by T7-T. harzianum (13.33 cm), T1-T. koningiopsis (11.33 cm), T2-T. lixii (10.83 cm), T4-T. harzianum (10.83 cm), T8-T. velutinum (10.16 cm), T6-T. brevicompactum (7.00 cm), and T3-T. koningii (5.70 cm). Post hoc analysis of shoot height indicated that most of the treatments were significantly different (P ≤ 0.05) from each other, except for T3-T. koningii, which was equivalent to the control. Conversely, the greatest increase in root length was recorded in plants treated with T7-T. harzianum (7.23 cm), followed by T5-T. lixii (6.83 cm), T8-T. velutinum (5.16 cm), T4-T. harzianum (4.53 cm), T2-T. lixii (3.30 cm), T3-T. koningii (3.20 cm), T6-T. brevicompactum (2.76 cm), and T1-T. koningiopsis (2.40 cm). Post hoc analysis of root length revealed no significant difference between T7-T. harzianum and T5-T. lixii or among the T2-T. lixii, T3-T. koningii, and T6-T. brevicompactum treatments. The greatest plant fresh weight was observed for plants treated with T5-T. lixii (669.33 mg), followed by T7-T. harzianum (359.33 mg), T8-T. velutinum (299.67 mg), T3-T. koningii (284.33 mg), T4-T. harzianum (197.33 mg), T1-T. koningiopsis (193.0 mg), T2-T. lixii (146.33 mg), and T6-T. brevicompactum (73.00 mg). The greatest plant dry weight was observed for plants treated with T5-T. lixii (28.7 mg), followed by T7-T. harzianum (22.67 mg), T8-T. velutinum (13.4 mg), T1-T. koningiopsis (11.4 mg), T4-T. harzianum (10.37 mg), T2-T. lixii (9.67 mg), T3-T. koningii (7.7 mg), and T6-T. brevicompactum (5.7 mg). Analysis of plant fresh and dry weight revealed a significant (P ≤ 0.05) difference between the control plants and the other treatment groups. However, there was no significant difference in fresh weight between the control and T6-T. brevicompactum groups or among the T1-T. koningiopsis, T2-T. lixii, and T4-T. harzianum groups. A significant difference (P ≤ 0.05) in plant dry weight was detected among all treatments (Table ). Principal component analysis (PCA) of the plant growth parameters and seed germination was performed to understand the effect of the Trichoderma isolates on plant growth. The PCA biplot in Fig. shows that shoot height, root length, plant fresh weight, and plant dry weight were highly correlated, while seed germination was only moderately correlated with the other growth parameters. Isolate T5-T. lixii had the greatest positive impact on plant growth; T7-T. harzianum and T8-T. velutinum fell into the same quadrant, also demonstrating a positive effect. T4-T. harzianum likewise increased plant growth, falling into the second quadrant. T1-T. koningiopsis, T2-T. lixii, and T6-T. brevicompactum moderately increased plant growth, whereas plants that received T3-T. koningii grew similarly to control plants.
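A PCA of the kind described above can be reproduced in outline from the reported treatment means. The sketch below assumes the variables were standardized before PCA (the paper does not state the scaling) and uses SVD, which is equivalent to the usual eigendecomposition of the correlation matrix.

```python
import numpy as np

# Treatment means reported in the text (rows: T5, T7, T1, T2, T4, T8, T6, T3;
# columns: shoot height (cm), root length (cm), fresh weight (mg), dry weight (mg)).
X = np.array([
    [16.16, 6.83, 669.33, 28.70],  # T5 - T. lixii
    [13.33, 7.23, 359.33, 22.67],  # T7 - T. harzianum
    [11.33, 2.40, 193.00, 11.40],  # T1 - T. koningiopsis
    [10.83, 3.30, 146.33,  9.67],  # T2 - T. lixii
    [10.83, 4.53, 197.33, 10.37],  # T4 - T. harzianum
    [10.16, 5.16, 299.67, 13.40],  # T8 - T. velutinum
    [ 7.00, 2.76,  73.00,  5.70],  # T6 - T. brevicompactum
    [ 5.70, 3.20, 284.33,  7.70],  # T3 - T. koningii
])

# Standardize each variable (mean 0, unit variance), then extract principal
# components via SVD of the standardized matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)

explained = S**2 / np.sum(S**2)  # fraction of variance per component
scores = Z @ Vt.T                # treatment coordinates for a biplot

print(np.round(explained, 3))    # variance shares, PC1 first
```

Plotting the first two columns of `scores`, with the rows of `Vt` as variable loadings, gives a biplot analogous to the one described for Fig. .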
Trichoderma species The identities of the Trichoderma isolates were confirmed by molecular analysis. The internal transcribed spacer (ITS) region of fungal 18s rDNA was amplified using primers ITS4 and ITS. Searches using the Basic Local Alignment Search Tool (BLAST) (NCBI GenBank) were performed, and the results are presented in Table .
A phylogenetic tree was constructed by analyzing nineteen sequences, including the sequences of the eight isolated Trichoderma species, 10 Trichoderma species from GenBank and a Fusarium oxysporum (MT151384) sequence as an outgroup (Fig. ). MEGA11 was used for evolutionary analysis. The evolutionary history was inferred using the neighbor-joining method, and the evolutionary distances were computed using the maximum composite likelihood method. The results revealed a total of 464 positions in the final dataset. We observed that T. harzianum , T. velutinum , and T. lixii were closely related and belonged to the Harzianum clade (Clade 1), while T. brevicompactum belonged to the Brevicompactum clade (Clade 2), and T. koningiopsis and T. koningii belonged to the Viride clade (Clade 3).
Trichoderma spp. Biochemical analyses were performed to characterize the plant growth-promoting activity of Trichoderma . The isolates were assessed for phosphate solubilization, and IAA, ammonia, and siderophore production. The results are presented in Table . Biochemical tests were performed to evaluate the promotion factors detected in Trichoderma species. The phosphate solubilization efficacy of the Trichoderma isolates was evaluated on Pikovaskaya agar by acidification, and all eight isolates utilized trisodium phosphate in Pikovaskaya agar and showed positive results (Fig. a,b). The ability to produce ammonia differed by isolate (Table ). The highest production was exhibited by T4 ( T. harzianum ), T5 ( T. lixii ) and T7 (T. harzianum ), and the other isolates, T1 ( T. koniniopsis ), T2 ( T. lixii ), T3 (T. koningii ), T6 ( T. brevicompactum ), and T8 ( T. velutinum ), displayed moderate production (Table ; Fig. c). Qualitative and quantitative analyses were conducted to determine IAA production by the eight Trichoderma isolates in culture media supplemented with tryptophan as a precursor. Interpolation of spectrophotometer readings using standard curves was used to quantify the amount of IAA produced by different isolates of Trichoderma . The production of IAA differed by isolate (Table ). A high amount of IAA was produced by T. brevicompactum (51.24 ± 0.18 µg/ml) and T. lixii (50.82 ± 0.65 µg/ml), whereas T. koniniopsis (0.15 ± 0.052 µg/ml) was the lowest producer in media supplemented with tryptophan (Fig. d). Siderophore production also differed by isolate (Table ). The ability to produce siderophores was demonstrated by the formation of orange halos around the colonies on the blue modified CAS agar plate (Fig. e,f). The T3 and T4 isolates ( T. koningii and T. harzianum , respectively) showed maximum zone formation, which was observed after 5 d. 
Approximately 25% of the isolates showed high siderophore production, and the remaining isolates were moderate producers. No zone formation was observed for isolates T5 ( T. lixii ) and T8 ( T. velutinum ).
Trichoderma on tomato plant growth Effect of Trichoderma isolates on seed germination Tomato seeds treated with Trichoderma isolates were observed for one week for seed germination. Compared with the control, priming with Trichoderma isolates significantly increased seed germination ( P ≤ 0.05), except for the T3 isolate. Seed germination was 100% in seeds treated with the T4 and T6 isolates, while seeds treated with the T1 and T5 isolates showed 91.1% and 90.9% seed germination, respectively. The seed germination rates for the T8, T2, and T7 isolates were 84.1%, 82.2%, and 72.7%, respectively. Seed germination after treatment with the T3 isolate was statistically equivalent to the control ( P ≤ 0.05) (Fig. ). Effects of Trichoderma isolates on tomato plant growth Overall, Trichoderma isolates significantly ( P ≤ 0.05) increased tomato plant growth compared to that of untreated control plants (Fig. ). Among the plants that received Trichoderma isolates, the greatest increase in shoot height was observed for the plants treated with T5- T. lixii (16.16 cm) followed by those treated with T7 -T. harzianum (13.33 cm), T1- T. koniniopsis (11.33 cm), T2- T. lixii (10.83 cm), T4- T. harzianum (10.83 cm), T8- T. velutinum (10.16 cm), T6- T. brevicompactum (7.00 cm), and T3- T. koningii (5.70 cm). Post hoc analysis of shoot height indicated that most of the treatments were significantly different ( P ≤ 0.05) from each other, except for T3- T. koningii , which was equivalent to control plants. Conversely, the greatest increase in root length was recorded in the plants treated with T7 -T. harzianum (7.23 cm), followed by those treated with T5- T. lixii (6.83 cm), T8- T. velutinum (5.16 cm), T4- T. harzianum (4.53 cm), T2- T. lixii (3.30 cm), T3- T. koningii (3.20 cm), T6- T. brevicompactum (2.76 cm) and T1- T. koniniopsis (2.40 cm). Post hoc analysis of plant root length revealed that there was no significant difference ( P ≤ 0.05) between T7 -T. harzianum and T5- T. 
lixii or among the T2- T. lixii , T3- T. koningii , and T6- T. brevicompactum treatments. The greatest plant fresh weight was observed for the plants treated with T5- T. lixii (669.33 mg), followed by those treated with T7 -T. harzianum (359.33 mg), T8- T. velutinum (299.67 mg), T3- T. koningii (284.33 mg), T4- T. harzianum (197.33 mg), T1- T. koniniopsis (193.0 mg), T2- T. lixii (146.33 mg), and T6- T. brevicompactum (73.00 mg). The maximum plant dry weight was observed for the plants treated with T5- T. lixii (28.7 mg), T7 -T. harzianum (22.67 mg), T8- T. velutinum (13.4 mg), T1- T. koniniopsis (11.4 mg), T4- T. harzianum (10.37 mg), T2- T. lixii (9.67 mg), T3- T. koningii (7.7 mg), and T6- T. brevicompactum (5.7 mg). Analysis of plant fresh and dry weight revealed a significant ( P ≤ 0.05) difference between the control plants and the plants in the other treatment groups. However, there was no significant difference in the fresh weight of plants in the control and T6- T. brevicompactum groups or among the T1- T. koniniopsis , T2- T. lixii , and T4- T. harzianum groups. A significant difference ( P ≤ 0.05) in plant dry weight was detected among all treatments (Table ). Principal component analysis (PCA) for plant growth parameters and seed germination was performed to understand the effect of Trichoderma isolates on plant growth. The PCA biplot in Fig. shows that shoot height, root length, plant fresh weight, and plant dry weight were highly correlated. Seed germination was moderately correlated with the other plant growth parameters. Trichoderma isolate T5- T. lixii had the greatest positive impact on plant growth; T7 -T. harzianum and T8- T. velutinum also fell into the same quadrant, demonstrating a positive effect on plant growth. T4- T. harzianum also increased plant growth, as it fell into the second quadrant. T1- T. koniniopsis , T2- T. lixii , and T6- T. brevicompactum moderately increased plant growth, whereas plants that received T3- T. 
koningii were similar in growth to control plants.
Trichoderma isolates on seed germination Tomato seeds treated with Trichoderma isolates were observed for one week for seed germination. Compared with the control, priming with Trichoderma isolates significantly increased seed germination ( P ≤ 0.05), except for the T3 isolate. Seed germination was 100% in seeds treated with the T4 and T6 isolates, while seeds treated with the T1 and T5 isolates showed 91.1% and 90.9% seed germination, respectively. The seed germination rates for the T8, T2, and T7 isolates were 84.1%, 82.2%, and 72.7%, respectively. Seed germination after treatment with the T3 isolate was statistically equivalent to the control ( P ≤ 0.05) (Fig. ).
Trichoderma isolates on tomato plant growth Overall, Trichoderma isolates significantly ( P ≤ 0.05) increased tomato plant growth compared to that of untreated control plants (Fig. ). Among the plants that received Trichoderma isolates, the greatest increase in shoot height was observed for the plants treated with T5- T. lixii (16.16 cm) followed by those treated with T7 -T. harzianum (13.33 cm), T1- T. koniniopsis (11.33 cm), T2- T. lixii (10.83 cm), T4- T. harzianum (10.83 cm), T8- T. velutinum (10.16 cm), T6- T. brevicompactum (7.00 cm), and T3- T. koningii (5.70 cm). Post hoc analysis of shoot height indicated that most of the treatments were significantly different ( P ≤ 0.05) from each other, except for T3- T. koningii , which was equivalent to control plants. Conversely, the greatest increase in root length was recorded in the plants treated with T7 -T. harzianum (7.23 cm), followed by those treated with T5- T. lixii (6.83 cm), T8- T. velutinum (5.16 cm), T4- T. harzianum (4.53 cm), T2- T. lixii (3.30 cm), T3- T. koningii (3.20 cm), T6- T. brevicompactum (2.76 cm) and T1- T. koniniopsis (2.40 cm). Post hoc analysis of plant root length revealed that there was no significant difference ( P ≤ 0.05) between T7 -T. harzianum and T5- T. lixii or among the T2- T. lixii , T3- T. koningii , and T6- T. brevicompactum treatments. The greatest plant fresh weight was observed for the plants treated with T5- T. lixii (669.33 mg), followed by those treated with T7 -T. harzianum (359.33 mg), T8- T. velutinum (299.67 mg), T3- T. koningii (284.33 mg), T4- T. harzianum (197.33 mg), T1- T. koniniopsis (193.0 mg), T2- T. lixii (146.33 mg), and T6- T. brevicompactum (73.00 mg). The maximum plant dry weight was observed for the plants treated with T5- T. lixii (28.7 mg), T7 -T. harzianum (22.67 mg), T8- T. velutinum (13.4 mg), T1- T. koniniopsis (11.4 mg), T4- T. harzianum (10.37 mg), T2- T. lixii (9.67 mg), T3- T. koningii (7.7 mg), and T6- T. brevicompactum (5.7 mg). 
Analysis of plant fresh and dry weight revealed a significant (P ≤ 0.05) difference between the control plants and the plants in the other treatment groups. However, there was no significant difference in the fresh weight of plants in the control and T6-T. brevicompactum groups or among the T1-T. koningiopsis, T2-T. lixii, and T4-T. harzianum groups. A significant difference (P ≤ 0.05) in plant dry weight was detected among all treatments (Table ). Principal component analysis (PCA) of the plant growth parameters and seed germination was performed to understand the effect of the Trichoderma isolates on plant growth. The PCA biplot in Fig. shows that shoot height, root length, plant fresh weight, and plant dry weight were highly correlated. Seed germination was moderately correlated with the other plant growth parameters. Trichoderma isolate T5-T. lixii had the greatest positive impact on plant growth; T7-T. harzianum and T8-T. velutinum also fell into the same quadrant, demonstrating a positive effect on plant growth. T4-T. harzianum also increased plant growth, as it fell into the second quadrant. T1-T. koningiopsis, T2-T. lixii, and T6-T. brevicompactum moderately increased plant growth, whereas plants that received T3-T. koningii were similar in growth to the control plants.
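As a sketch of how a PCA of this kind can be run, the snippet below standardizes four growth variables for the eight treatments and extracts components via SVD. The input matrix uses illustrative numbers loosely echoing the ranges reported in the text, not the study's raw replicate data.

```python
import numpy as np

# Hypothetical per-treatment means for the 8 Trichoderma isolates (rows T1-T8):
# columns = shoot height (cm), root length (cm), fresh weight (mg), dry weight (mg).
X = np.array([
    [11.33, 2.40, 193.0, 11.4],   # T1
    [10.83, 3.30, 146.3,  9.7],   # T2
    [ 5.70, 3.20, 284.3,  7.7],   # T3
    [10.83, 4.53, 197.3, 10.4],   # T4
    [16.16, 6.83, 669.3, 28.7],   # T5
    [ 7.00, 2.76,  73.0,  5.7],   # T6
    [13.33, 7.23, 359.3, 22.7],   # T7
    [10.16, 5.16, 299.7, 13.4],   # T8
])

# Standardize each column (PCA on the correlation matrix, the usual choice
# when variables carry different units).
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# PCA via SVD of the standardized data matrix.
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)   # variance share of each component
scores = Z @ Vt.T                 # treatment coordinates (biplot points)

print(explained.round(3))
```

A biplot like the one in the figure would then plot the first two columns of `scores` for the treatments together with the rows of `Vt` as variable loadings.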
Fungi from the genus Trichoderma have been widely used in agriculture because of their mycoparasitic potential and their ability to improve plant health and protect against phytopathogens, making them desirable symbionts. The goal of the current study was the isolation, molecular identification, and characterization of Trichoderma from Saudi Arabia and the evaluation of their ability to promote plant growth. Soil properties of the samples collected from Abha and Riyadh were also determined. In the Riyadh region, soil pH varied from mild to extremely alkaline, while soil in the Abha region had a neutral pH. The EC values also varied widely in both regions. Previous studies have reported different pH and EC values in the Riyadh region. Masoud & Aal reported an average pH of 7.64; however, their sample sites were different from those in the present study . Al Barakah et al. reported mean pH and EC values of 7.7 and 1.03 dS/m, respectively. Siham reported a pH range of 7.58–7.76 and EC values of 22.5–32.0 from an industrial city in Riyadh. Irrigation water and dissolving soil minerals are some sources of salts in soil. Moreover, the Riyadh region is rich in weathered limestone, and the climate is exceedingly arid; therefore, evapotranspiration exceeds precipitation. These factors may contribute to higher pH and alkalinity , . The organic matter content of soil in the Abha region was greater than that in the Riyadh region. Since most soil in the Abha region was from a vegetation area, organic matter can be attributed to the decomposition of dead plant materials. As expected, the moisture content varied according to the sample collection site. The soil samples collected near a water body had a higher moisture content than the other samples. The Riyadh and Abha regions both had large populations of soil fungi. Recently, fungal populations were shown to range from 4.19 to 4.67 CFU/g in the Riyadh region, and the presence of Trichoderma was also observed , , – . 
The soil fungi of Saudi Arabia have been investigated previously, and although Trichoderma was not detected in desert soil samples , other studies reported Trichoderma in soils from other sources – . In the present study, we isolated eight Trichoderma species ( T. koningiopsis , T. lixii , T. koningii , T. harzianum , T. brevicompactum , T. velutinum , T. lixii , and T. harzianum ) from the Abha and Riyadh regions. Similarly, Hussein & Yousef isolated two species of Trichoderma ( T. harzianum and Trichoderma sp.) from petroleum-contaminated soil . Additionally, Abd-Elsalam identified two Trichoderma complex species ( T. harzianum/H. lixii and T. longibrachiatum/H. orientalis ) from soil collected from Rawdet Khuraim in Saudi Arabia using morphological criteria and DNA sequence analysis . Molecular characterization based on multiple gene sequencing enables the accurate identification of Trichoderma species. Previous reports of Trichoderma isolation and characterization in Saudi Arabia depended on morphological methods, which may not distinguish closely related species. Globally, in 1998, Kindermann and others attempted to study the phylogeny of the whole Trichoderma genus based on sequencing of the ITS region, and they demonstrated that this approach was a powerful method of identifying Trichoderma species. Other researchers have since reported the importance of the ITS sequence in Trichoderma species identification – . However, our study aimed to provide a more reliable method for the identification of Trichoderma species in Saudi Arabia. We targeted the ITS region of rDNA, which is one of the most frequently targeted regions for the molecular characterization of Trichoderma species. We identified T. koningiopsis , T. velutinum , T. brevicompactum , two isolates of T. harzianum , two isolates of T. lixii , and T. koningii. This is the first time that such diverse Trichoderma species have been reported in Saudi Arabia.
Phylogenetic analysis revealed that the Trichoderma isolates belonged to three different clades. T. harzianum , T. velutinum and T. lixii were closely related and belonged to the Harzianum clade. This finding is consistent with that of Gherbawy , who identified 91 Trichoderma isolates based on ITS regions from the soil of Taif City, Saudi Arabia. A total of 78 isolates from the population were identified as Trichoderma harzianum (Tel. Hypocrea lixii ). Additionally, Chaverri and others reported that T. harzianum showed high similarity with T. lixii . Other molecular sequence data has demonstrated that T. harzianum is a genetically variable complex composed of one morphological species and several phylogenetic species . Trichoderma species have beneficial effects on plants, including enhancing plant growth, root structure, seed germination, viability, photosynthetic efficiency, flowering, and yield quality, thereby promoting overall plant health . In the present study, eight species of Trichoderma were investigated for plant growth-promoting traits, including phosphate solubilization and IAA, ammonia, and siderophore production. The Trichoderma isolates varied in terms of their plant growth-promoting traits. All the isolates were able to mobilize phosphate, while T. harzianum and T. lixii produced the greatest amount of ammonia. T. koningii and T. harzianum were superior in siderophore production, and T. brevicompactum and T. lixii produced the most IAA. Additionally, these eight isolates of Trichoderma were evaluated for their ability to stimulate tomato seed germination and plant growth in the early stages of seedling development. The results showed that all the Trichoderma isolates significantly increased seed germination and plant growth, especially the T5- T. lixii , T7- T. harzianum , and T8- T. velutinum isolates, which were highly effective at stimulating plant growth by increasing shoot and root length and the fresh and dry weights of shoots and roots. 
Our results are consistent with those obtained by Bader , who noted that a set of Trichoderma strains can produce IAA, solubilize phosphate, and promote tomato plant growth by increasing shoot length and the fresh and dry weights of shoots and roots.
The goal of the present study was to identify examples of the plant growth-promoting fungus Trichoderma in two regions of Saudi Arabia, Abha and Riyadh, by utilizing morphological and molecular tests. The soil properties of Abha and Riyadh differ significantly, as does the fungal population in these areas. Six diverse Trichoderma species were detected in Abha soil, while only two different species were isolated from Riyadh soil. Molecular identification and phylogenetic analysis confirmed the following six species: T. koningiopsis , T. velutinum , T. brevicompactum , T. harzianum , T. lixii , and T. koningii. Phylogenetic analysis based on ITS sequences grouped the strains into three clades. The Trichoderma isolates varied in phosphate solubilization and IAA, ammonia, and siderophore production, which are plant growth-promoting traits. In vivo experiments on tomato plants showed that all Trichoderma isolates except T3-T. koningii increased seed germination. The Trichoderma isolate T5-T. lixii had the greatest effect on tomato plant growth, followed by T7-T. harzianum , T8-T. velutinum , T4-T. harzianum , T1-T. koningiopsis , T2-T. lixii , and T6-T. brevicompactum; the least effective was T3-T. koningii. To our knowledge, this is the first characterization of plant growth-promoting Trichoderma and identification of T. brevicompactum from Saudi Arabia.
Below is the link to the electronic supplementary material. Supplementary Material 1
The effect of cigarette smoking and heated tobacco products on different denture materials; an in vitro study

In the oral environment, dental prostheses are continuously exposed to deleterious complex endogenous and exogenous factors that might result in biodegradation that alters the physical and mechanical properties of the material; one of these is cigarette smoking. According to the World Health Organization, cigarette smoking is a public health problem reported in almost 1.3 billion people around the world ; despite protracted anti-smoking campaigns, smoking remains an everyday habit. Conventional cigarette smoke (CS) is composed of a mixture of a gaseous and a particulate phase and contains toxic agents such as carbon monoxide (CO) . Pigments contained in tobacco residue (tar) can be responsible for the discoloration of both dental tissues and resin-based restorations . Also, resin-based restorations may become contaminated by heavy metals such as lead and cadmium, changing chemical and physical properties such as surface roughness, water sorption, solubility, and staining . Recently, new products known as “modified risk tobacco products” (MRTP) have been presented as an alternative to conventional cigarettes and an intermediate step in quitting the smoking habit, on the assumption that they contain fewer harmful chemicals than regular CS , and many smokers have switched to these types of products. Therefore, the increasing use of MRTP leads to the need to evaluate the effects of such systems on the color stability of restoration materials and dental tissues . Looking closely at both types of smoking to compare their effects, the smoke that directly emerges from a lit cigarette is frequently referred to as “whole smoke.” It comprises liquid droplets suspended in an aerosol mixture of gases and semi-volatile chemicals. This phase is called the particle phase.
It is commonly known as “tar,” or the nicotine-free particulate fraction when it is devoid of nicotine. In comparison, e-cigarettes emit an aerosol that includes nicotine and other substances, but they do not produce the same particulate matter as conventional cigarettes. Consequently, these products are thought to stain less than conventional smoking . Further comparisons between CS and HT show that CS results from incomplete tobacco combustion at temperatures reaching 900 °C, whereas heated tobacco aerosols are produced at temperatures well below 400 °C. This significant difference in temperature alters the chemical constituents produced, supposedly leaving the majority of harmful substances found in CS either absent from heated tobacco or present at substantially lower concentrations . Conventional cigarette smoke affects the marginal integrity of polymeric tooth restorations and denture bases, such as heat-cured, flexible, titanium-reinforced, and 3D-printed resins, and it is natural to assume that other properties, like discoloration, surface roughness, and bacterial colonization, might also be affected . Therefore, the possibility of in vitro simulation of the staining susceptibility to smoke could be of interest. Unfortunately, there is a lack of standardization for smoke staining protocols . This study explores the claims that heated tobacco could be less harmful and have fewer adverse effects than cigarette smoke. According to research on smoking cessation, smokers are more likely to quit when they are made aware of the adverse effects of smoking than when other strategies are employed to induce the same behaviour , which is what prompted the authors to perform this study. In this research, the materials used are heat-cured acrylic resin and several modifications of it. Conventional heat-cured acrylic resin is known for its brittleness and low impact strength.
Thus, attempts to modify these properties involve the use of metal wires or plates, fibers, particles, or metal powder. It was noted that the addition of metal fillers provides improved strength and thermal conductivity and makes the acrylic resin radiopaque , hence the addition of titanium nanoparticles to acrylic resin in this study. Flexible acrylic resin, on the other hand, shows lower surface roughness, hardness, and impact strength compared to conventional heat-cured acrylic resin . A recent study compared the flexural strength of conventional and 3D-printed acrylic resin, finding the latter inferior to the former . This study compared the effect of CS versus heated tobacco, using a custom-made chamber device, on the discoloration, surface roughness, and bacterial colonization of different oral prosthesis materials. The null hypothesis was that conventional smoke and heated tobacco exposure would not significantly change the surface roughness, bacterial accumulation, and color of the study samples, and that there is no difference between the effects of the two types of smoking. The Research and Ethics Committee of the Faculty of Dentistry, The British University in Egypt, reviewed and approved this research project protocol with project approval number 24-005. The sample size was calculated by G*Power software for Windows version 3.1.9.4 based on a previous study . The minimum sample size was calculated to be 8 samples per group; it was increased to 10 samples per group to compensate for any defects. The primary outcomes are measuring changes in surface roughness, bacterial accumulation, and dental materials’ color stability due to different smoking types.
Samples preparation
Four different denture base materials were used to construct one hundred and twenty disc-shaped samples of 1 cm diameter and 2 mm thickness: conventional heat-cured acrylic resin (CA) (Acrostone, Egypt), flexible acrylic resin (FA) (Valplast, Valplast International Corp, USA), heat-cured acrylic resin reinforced with titanium nanoparticles (TA) (titanium nanoparticles: Nanogate, Egypt), and 3D-printed acrylic resin (PA) (Nexdent, The Netherlands); the compositions of the materials are shown in (Table ). Another sixty samples of artificial teeth were used: conventional ready-made acrylic resin teeth (Acrostone, Egypt) and 3D-printed acrylic resin teeth (Nexdent, The Netherlands). The heat-cured acrylic resin groups were constructed using the conventional compression-molding technique with a long curing cycle (74 °C for 8 h followed by 100 °C for 1 h). For the printed groups (PA and 3D-printed teeth), CAD software (Exocad, Darmstadt, Germany) was used to design the samples. Then, the printing angle was set at 90 degrees, the 3D printer (Anycubic, China) was filled with liquid resin (pink for denture base samples and white for teeth samples), and the samples were subsequently printed. The denture base samples were used to assess surface roughness and biofilm formation, while the artificial teeth samples were used to determine color change. All groups were divided according to the smoking method into three subgroups: the control group with no smoking exposure (I), the conventional smoking exposure group (II), and the heated tobacco exposure group (III). All samples were stored in artificial saliva at 37 °C for 24 h to simulate the conditions of the oral cavity before any intervention. Artificial saliva was obtained by dissolving the following ingredients in one liter of deionized water: xanthan gum (0.92), KCl (1.2), NaCl (0.85), MgCl2 (0.05), NaH2PO4 (0.13), C8H8O3 (0.13), and CaCl2 (0.13) .
Baseline measurements
The surface roughness of all denture base samples was measured using a profilometer (JITAI8101 Surface Roughness Tester, Beijing Jitai Tech Detection Device Co. Ltd, China) at a cut-off of 0.25 mm, a number of cuts of 1, and a range of ± 40 μm. In compliance with ISO 11562 recommendations for standardization, each sample was measured three times at different locations (the middle and sides), and the average was obtained to get the mean surface roughness value (Ra). According to the CIE L*a*b* color order system, the three color parameters of each artificial tooth specimen were measured using a VITA Easyshade Advance 4.01 spectrophotometer (VITA Zahnfabrik, Bad Säckingen, Germany) at 3 different areas. Mean measurements were then calculated.

Smoking standardizing device
The smoking standardizing apparatus, designed and constructed at The Dentistry Research Center, Faculty of Dentistry, The British University in Egypt, was a crucial tool in this study. It was created to simulate the smoking process in order to investigate the effects of smoking on different dental materials. The apparatus includes a motor with a gearbox to lower its speed to 2 Hz (2 cycles per second), a crankshaft, and a connecting rod attached to a slider to convert the rotational movement into a 4.5 cm-long linear movement. A stainless-steel cylinder with an internal diameter of 12 cm (6 cm radius) with a piston was designed to generate suction with a volume of about 500 ml, simulating the tidal volume taken during smoking. A cigarette or electronic smoking device is attached to a valve that allows inhalation of the smoke in one direction only, simulating the mouth. Another valve allows exhalation in one direction only, simulating the nose. To simulate the oral cavity, a pool of water with a heater linked to a thermal sensor regulates the temperature between 36.5 and 37.5 °C at 100% humidity .
The samples were mounted on 2 perforated trays to allow equal exposure of all samples to the smoke (Fig. ).

Exposure of specimens to smoking
Conventional cigarettes (LM, Philip Morris International Inc., Egypt) and heated tobacco sticks (Heets, Russet Selection, Philip Morris International Inc., Italy) were used. The samples were exposed to the smoke of 600 cigarettes/heets, representing 30 days of medium smoker behavior (20 cigarettes per day) . Then, the samples were gently washed with distilled water for 1 min. The control groups were mounted on the perforated trays and placed in the smoking apparatus, and a complete cycle was then performed without smoking.

Postexposure measurements
The surface roughness of the denture base samples was measured using the same parameters as before. The color parameters of each artificial tooth sample were measured using the same method as before, and the color change was then calculated according to the following formula: [12pt]{minimal} $$\Delta E_{2-1} = ([\Delta L]^2 + [\Delta a]^2 + [\Delta b]^2)^{1/2}$$

SEM assessment
One sample from each group was examined by scanning electron microscopy (Thermo Fisher (USA) Quattro S Field Emission Gun Environmental SEM, “FEG ESEM”) at the Nanotechnology Research Center at The British University in Egypt to evaluate the surface topography.

Assessment of bacterial biofilm formation on dental discs by a Streptococcus mutans strain (S. mutans)
Bacterial inoculum preparation
A pure single colony of the reference strain S. mutans (ATCC 25175) was used to inoculate 5 ml aliquots of brain heart infusion broth supplemented with 2% sucrose in test tubes. The bacterial cultures were placed in an incubator (Model B 28, BINDER GmbH) at 37 °C for 48 h. The bacterial culture was then adjusted to an optical density (OD) of 0.09 at 600 nm using brain heart infusion broth containing 2% sucrose. The concentration of bacteria was determined using a spectrophotometer (Unicam, UK).
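The color change formula above is the classic CIE76 ΔE*ab distance between two L*a*b* readings, which reduces to a one-line computation. A minimal sketch with hypothetical spectrophotometer values (not study data):

```python
import math

def delta_e(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) readings."""
    dL = lab2[0] - lab1[0]
    da = lab2[1] - lab1[1]
    db = lab2[2] - lab1[2]
    return math.sqrt(dL**2 + da**2 + db**2)

# Hypothetical readings for one tooth sample before and after smoke exposure.
before = (78.0, 1.5, 16.0)
after = (74.0, 2.5, 19.0)
print(round(delta_e(before, after), 2))  # → 5.1
```

In practice each sample's three-area readings would be averaged before computing ΔE, as the baseline-measurement protocol describes.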
The denture base samples were then sterilized and inserted separately into 50 ml falcon tubes. Aliquots of 2 ml of the adjusted bacterial suspension were pipetted into these falcon tubes for biofilm formation. The discs containing bacterial suspension were then incubated for 48 h at 37 °C . After that, the samples were aseptically removed from the cultures using sterile forceps and washed gently three times with 0.9% saline to remove non-adherent bacteria, then transferred to new falcon tubes containing 5 ml of 0.9% saline. To determine the biofilm formed on the surface of the samples, the falcon tubes were vortexed with a sonicator (Acculab, USA) at 30 g for 3 min to detach microorganisms from the discs. Then, aliquots of 100 µL of the biofilm suspension were serially diluted up to 10^6. Dilution was performed in triplicate. After that, 10 µL of each diluted suspension was inoculated on brain heart agar plates and incubated at 37 °C for 48 h (Fig. ). After the incubation, the colony-forming units (CFU) on plates with 30 to 300 typical colonies of S. mutans were counted and then reported in CFU/ml .

Statistical analysis
Statistical analysis of the obtained data was performed using SPSS for Windows (version 26.0; SPSS Inc., Chicago, IL, USA). A paired-sample t-test was conducted to determine the change in surface roughness. An independent-sample t-test was used to compare color changes between the different artificial teeth materials. One-way ANOVA and Tukey post hoc tests were used to determine the effect of the various materials and smoking types on surface roughness and bacterial biofilm formation.
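The CFU/ml reported from the dilution plating above is back-calculated as plate count × dilution factor ÷ plated volume. A minimal sketch of that arithmetic; the helper name and the example numbers are hypothetical, and the 30–300 window is the countable range stated in the protocol:

```python
def cfu_per_ml(colonies, dilution_factor, plated_volume_ml):
    """Back-calculate CFU/ml of the original suspension from one plate count.

    colonies         -- colonies counted on the plate (valid range 30-300)
    dilution_factor  -- e.g. 1e4 for the 10^-4 serial dilution
    plated_volume_ml -- volume spotted on the plate (10 µL = 0.01 ml)
    """
    if not (30 <= colonies <= 300):
        raise ValueError("count outside the 30-300 countable range")
    return colonies * dilution_factor / plated_volume_ml

# e.g. 150 colonies from a 10 µL spot of the 10^-4 dilution:
print(cfu_per_ml(150, 1e4, 0.01))  # → 150000000.0 (1.5 × 10^8 CFU/ml)
```

Since dilution was performed in triplicate, the three per-plate estimates would then be averaged before reporting.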
Figure shows the samples after performing different exposure procedures: I: the control group with no smoking exposure, II: conventional cigarette smoking exposure, and III: heated tobacco exposure.

Surface roughness results
The control groups did not show a significant increase in surface roughness for all four types of used denture base materials. However, both types of smoking caused a statistically significant increase in surface roughness. The mean surface roughness values before and after exposure are shown in (Table ). Regarding the effect of the type of smoking on change in surface roughness (Δ Ra) of different denture base materials, there was a statistically significant difference between the control and the conventional cigarette smoking subgroups. However, there was no statistically significant difference between the control and the heated tobacco groups (Table ; Fig. ).
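The before/after Ra comparison relies on a paired-sample t-test. A minimal pure-Python sketch of the statistic, using made-up Ra values rather than the study's measurements:

```python
import math

def paired_t(before, after):
    """t statistic and degrees of freedom for a paired-sample t-test."""
    d = [a - b for a, b in zip(after, before)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance of differences
    t = mean_d / math.sqrt(var_d / n)
    return t, n - 1

# Hypothetical Ra values (µm) for 10 discs before/after smoke exposure:
before = [0.21, 0.25, 0.19, 0.22, 0.24, 0.20, 0.23, 0.21, 0.22, 0.20]
after = [0.29, 0.31, 0.27, 0.30, 0.33, 0.28, 0.30, 0.29, 0.31, 0.27]
t, df = paired_t(before, after)
print(round(t, 2), df)  # t ≈ 26.84 with 9 df
```

The resulting t is compared against the t distribution with n − 1 degrees of freedom to obtain the p-value (SPSS does this internally).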
Concerning the different materials, there was no statistically significant difference between the mean Δ Ra values of the different materials in the control, conventional cigarette smoking, or heated tobacco groups (Table ; Fig. ). The surface topography images of the studied samples at 1000X are presented in (Fig. ). The CS groups showed a significant change in surface topography, with increased pitting of the surface compared to the control groups. The change in surface topography was almost identical for all types of denture base materials. The HT groups also showed more pitting than the control groups, but to a lesser extent than the CS groups.

Bacterial accumulation test
Using ANOVA and Tukey as post-hoc tests, it was found that there was a statistically significant difference between all smoking subgroups. In the CA, FA, and PA groups, the heated tobacco subgroup (CA III) showed the highest level of bacterial accumulation, while the control groups showed the least. For the TA group, the heated tobacco subgroup showed the significantly highest level of bacterial accumulation, and there was no difference between the control and the conventional cigarette smoking groups (Table ; Fig. ). In the control subgroup (I), there was a statistically significant difference between all groups. The (FA I) and the (PA I) subgroups showed significantly higher bacterial accumulation than the (CA I) and the (TA I) groups. In the conventional cigarette smoking subgroup (II), there was a statistically significant difference between all groups, with the (CA II) showing the highest significant bacterial accumulation, followed by the (TA II) and (PA II), and the (FA II) showing the least. For the heated tobacco subgroup (III), there was a statistically significant difference between all subgroups. The (TA III) showed the highest significant bacterial accumulation, and the (FA III) showed the least.
There was no statistically significant difference between the (CA III) and (PA III) or the (PA III) and (FA III) subgroups (Table ; Fig. ).

Color change

For both types of teeth, the conventional cigarette smoking groups showed significantly higher mean color change (ΔE) values than the control and heated tobacco groups (Table ; Fig. ). Within the smoking groups, there was no statistically significant difference between the conventional acrylic resin and the 3D-printed teeth (Table ; Fig. ).
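The group comparisons above rely on one-way ANOVA followed by Tukey post-hoc tests. As an illustration of the omnibus step only, here is a minimal pure-Python sketch of the one-way ANOVA F statistic; the Δ Ra values below are invented for demonstration and are not the study's measurements:

```python
from math import fsum

def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of sample groups."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = fsum(x for g in groups for x in g) / n_total
    means = [fsum(g) / len(g) for g in groups]
    # Between-group sum of squares (effect of exposure type)
    ss_between = fsum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    # Within-group sum of squares (residual variation)
    ss_within = fsum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Illustrative delta-Ra values (µm) per exposure group -- invented, not study data
control = [0.02, 0.03, 0.01]
cigarette = [0.20, 0.24, 0.22]
heated_tobacco = [0.08, 0.10, 0.09]
f_stat = one_way_anova_f([control, cigarette, heated_tobacco])
```

A significant F would then be followed by Tukey's HSD on the same groups to localize which pairs differ, as done in the study.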
An in vitro study design was employed to control all the factors and enable accurate data collection.
The study evaluated and compared the effect of conventional cigarette smoking and heated tobacco on the surface roughness, bacterial accumulation, and color stability of different denture base and teeth materials. The null hypotheses of the study were rejected, as significant differences were found among the groups in surface roughness, biofilm formation, and color change. The results showed that conventional cigarette smoking and heated tobacco significantly increased the surface roughness of the different denture base materials. Although conventional smoke increased the surface roughness to a greater degree, this difference was not statistically significant. These results are consistent with previous studies, which state that smoking of all types affects the surface roughness of dental materials and that tobacco consumption of all types is associated with tooth discoloration and changes in the surface properties of dental materials. This finding was supported by the SEM images, which showed that all CS groups had a noticeable increase in pitting of the acrylic surface. With CS, these changes were attributed to the deposition of cigarette substances on the surface of the acrylic resin. When a cigarette is burned, the resulting smoke contains multiple components, such as carbon monoxide, carbon dioxide, nicotine, ammonia, nickel, arsenic, tar, and heavy metals such as lead and cadmium. Another possible explanation may be the increase in temperature within the smoking chamber, i.e., the thermal effects of smoking, as reported in a previous study. According to Mathias P et al., cigarette tar contains aromatic hydrocarbons that have a surface-dissolving action on polymeric materials; polymeric materials are insoluble in oral fluids but are soluble to some extent in aromatic hydrocarbons.
From another point of view, cigarette smoke may mix with saliva and produce an acidic solution that damages the surface integrity of the materials. Previous studies have claimed that heated tobacco is a significantly safer smoking option in terms of product release due to the absence of tar, which has been identified as a leading cause of increased surface roughness and material discoloration. However, in our study, although the increase in surface roughness after exposure to HT was less than that after CS, this difference was not significant. This study also showed a significant increase in bacterial biofilm formation on all denture base materials after CS and HT exposure, which could be related to surface roughness. The clinical threshold value of surface roughness (Ra) for plaque retention on intraoral materials is 0.2 μm, as advocated by Bollen C et al.; below this value no further reduction in plaque accumulation is expected, whereas above it a proportional increase occurs. Other studies have previously stated that surface irregularities provide an environment that promotes bacterial colonization and biofilm formation. Surface roughness increases surface area, hydrophobicity, and surface energy, which in turn affect the mechanism of bacterial attachment to that surface and its adhesion. The increase in bacterial biofilm formation was more pronounced in all HT groups than in the CS groups. Another study previously reported that e-smoke promoted the growth of S. mutans, the expression of virulence genes, and the adhesion to and formation of biofilms on tooth surfaces, supporting the increase in bacterial biofilm formation observed here. Increased surface roughness has long been linked to color change in resins. This can be seen when comparing resins to dental ceramics, whose highly glazed and polished surfaces result in greater color stability.
In comparison, resins are more porous and have a less polished outer surface. A recent study found that 3D-printed resins showed inferior mechanical properties and higher water solubility than conventional heat-cured acrylic resin, even before external stimuli, which might lead one to expect a significant difference between the two materials when exposed to smoking; however, this was not the case in our study, where the difference between the two materials was not statistically significant. Spectrophotometers often report color using the CIELAB color system, the international standard for color measurement. It is currently one of the most popular and widely used color spaces and is well suited for determining minor color differences. ΔE values less than 1 are regarded as undetectable by the human eye. Color differences of 1 < ΔE < 3.3 may be detectable by a skilled operator but are considered clinically acceptable. On the other hand, values of ΔE > 3.3 would be detectable by a nonskilled person and are therefore considered clinically unacceptable. In this study, all groups except the conventional acrylic resin artificial teeth showed ΔE > 3.3. Both heated tobacco and CS caused a significant color change in 3D-printed teeth. This coincides with another study, which found that the most remarkable changes in surface roughness were observed in 3D-printed samples, followed by heat-polymerized samples, and that these changes can alter translucency and opacity, thus affecting color. In contrast, Alfouzan et al. compared the color stability of 3D-printed and conventional heat-polymerized acrylic resins following aging, mechanical brushing, and immersion in a staining medium, and found that the color changes of 3D-printed denture resins were low compared with conventional heat-polymerized resin.
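The ΔE thresholds discussed above can be made concrete with a short sketch. The CIE76 formula (ΔE = √(ΔL*² + Δa*² + Δb*²)) and the 1.0 / 3.3 cut-offs follow the text; the L*a*b* triplets below are hypothetical, not the study's spectrophotometer readings:

```python
from math import sqrt

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference between two CIELAB triplets (L*, a*, b*)."""
    return sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

def clinical_rating(delta_e):
    """Map a delta-E value onto the acceptability thresholds cited in the text."""
    if delta_e < 1.0:
        return "undetectable by the human eye"
    if delta_e <= 3.3:
        return "detectable by a skilled operator, clinically acceptable"
    return "clinically unacceptable"

before = (62.0, 2.0, 18.0)   # hypothetical baseline tooth shade
after = (58.0, 4.0, 21.0)    # hypothetical shade after smoke exposure
de = delta_e_cie76(before, after)
rating = clinical_rating(de)
```

Later ΔE formulas (CIE94, CIEDE2000) weight the components differently, but CIE76 matches the simple threshold interpretation used here.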
Flexible resin, on the other hand, was found by another study to be the least-staining denture base material compared with conventional heat-cured acrylic resin. CS caused a significant color change in conventional acrylic resin and 3D-printed artificial teeth compared to the heated tobacco groups, which could be attributed to the latter's absence of tar. Several studies report that cigarette smoking affects the color of natural teeth and dental materials, including denture teeth. The results of this study were consistent with those of Mathias P et al., Wasilewski MA et al., and Mathias P et al., who evaluated the effect of tobacco smoke on the color of composites. A slight color change relative to baseline occurred in all control group samples; this was assumed to be due to the thermal effect of the immersion temperature, water absorption, and mucin, one of the components of the artificial saliva. According to Craig, polymeric teeth are insoluble in oral fluids but soluble to some extent in aromatic hydrocarbons, and according to Mathias P et al., tar contains aromatic hydrocarbons. It was therefore deduced that such surface-dissolving substances might be causative factors of discoloration. There was also a possibility that cigarette smoke mixed with saliva produced an acidic solution that damaged the surface integrity of the materials, creating favorable conditions for discoloration. Despite this study's limitations, we conclude that conventional cigarette smoking and heated tobacco affect the surface roughness, bacterial biofilm formation, and color of dental materials.
Molecular and immunophenotypic characterization of SMARCB1 (INI1)-deficient intrathoracic neoplasms

The Switch/Sucrose-Non-Fermentable (SWI/SNF) complex, also known as the BRG1/BRM-associated factor (BAF) complex, is involved in chromatin remodeling and transcriptional regulation, thereby contributing to cell differentiation and cell proliferation processes. Over the past years, multiple SWI/SNF subcomplexes and subunits have been discovered and their roles in oncogenic processes described. The most relevant ones currently discussed and studied are SMARCB1 (BAF47, INI1 or SNF5), SMARCA4 (BRG1 or BAF190A), SMARCA2 (BRM or BAF190B), ARID1A (BAF250A or SMARCF1) and PBRM1 (BAF180). Nuclear INI1 (SMARCB1) is highly conserved and ubiquitously expressed in normal cells. A morphological correlate associated with SMARCB1 genomic alterations and immunohistochemical loss is the so-called 'rhabdoid phenotype', defined as the presence of eosinophilic cytoplasmic condensation adjacent to the nucleus. Loss of nuclear INI1 (SMARCB1) protein expression usually results from biallelic inactivation caused by different types of epigenetic or other deleterious genetic errors. Complete loss of INI1 (SMARCB1) expression has been linked to a number of pediatric and adult mesenchymal tumors. The prototypical example is inactivation of SMARCB1 (INI1) in pediatric malignant rhabdoid tumors (MRTs) and epithelioid sarcomas. SMARCB1-deficient rhabdoid tumors are among the most aggressive and lethal pediatric cancers; however, mutations in SMARCB1 also form the etiological basis of familial schwannomatosis, which is characterized by a predisposition to benign tumors. Rare SMARCB1-deficient tumors, more commonly occurring in adult patients, include synovial sarcomas, epithelioid malignant peripheral nerve sheath tumors, myoepithelial carcinomas, extraskeletal myxoid chondrosarcomas, chordomas, gastrointestinal stromal tumors (GIST) and ossifying fibromyxoid tumors.
Intrathoracic tumors associated with SMARCB1 inactivation are exceedingly rare but should be suspected when dealing with tumors arising in the soft tissue of the chest wall. The most frequent SWI/SNF complex subunit alteration in thoracic, lung, pleural and mediastinal neoplasms affects SMARCA4. SMARCA4-deficient undifferentiated tumors were recently recognized as a new entity in the WHO classification of thoracic tumors and are defined as malignant neoplasms with an undifferentiated or rhabdoid phenotype and deficiency of SMARCA4 (BRG1). These tumors show molecular overlap with smoking-associated NSCLC harboring driver alterations in STK11, KRAS, and/or KEAP1. Therefore, this entity is now termed SMARCA4-deficient undifferentiated tumor rather than sarcoma, and must be distinguished from SMARCA4-deficient NSCLC, since SMARCA4 mutations and/or loss of BRG1 expression occur in a subset of TTF1/p40-negative tumors, accounting for ~10% of poorly differentiated lung adenocarcinomas. This type of carcinoma typically affects men who smoke and is associated with short overall survival, regardless of disease stage. Due to its aggressive behavior, identification of BRG1 (SMARCA4) inactivation and loss has become of interest for lung cancer management, as recent preclinical studies have described therapeutic vulnerabilities that may overcome the inherently aggressive biology of SMARCA4-deficient NSCLCs. SMARCB1 deficiency has been described mainly in mesenchymal tumors, but next-generation sequencing studies have subsequently shown that SMARCB1 alterations are also found in a subset of carcinomas, although at low frequency. There they are often regarded as passenger events or second hits acquired at a later stage in tumorigenesis, as opposed to their initiating/driving role in MRT or epithelioid sarcoma.
SMARCB1 deficiency has been described in carcinomas of the gastro-entero-pancreatic tract, the head and neck region and in neoplasms of the genitourinary tract, representing a broad histomorphologic spectrum and polyphenotypic variations. In short, alterations of SWI/SNF complex subunits, especially SMARCB1 and SMARCA4, commonly correlate with distinct pathological features such as a solid syncytial architecture, monotonous vesicular nuclei dotted with conspicuous nucleoli and/or rhabdoid cytoplasmic inclusions. Nonetheless, SWI/SNF complex subunit alterations are observed in both epithelial and mesenchymal tumors, both benign and malignant. Their recognition in routine pathology practice is challenging but possible, paving the way to targeted therapies. In this study, we molecularly and immunophenotypically characterized nine SMARCB1 (INI1)-deficient intrathoracic neoplasms and correlated this information with clinical presentation and outcome. We encountered four intrathoracic neoplasms with immunohistochemical INI1 (SMARCB1) deficiency in our diagnostic routine and consultation service, each of which proved challenging from a diagnostic standpoint. Through a retrospective archival review using immunohistochemistry on TMAs and our in-house sarcoma database, we found five additional cases.
Patient cohort

Four intrathoracic neoplasms with INI1 (SMARCB1) deficiency were identified within our surgical and molecular pathology service, including two consultation cases. Through a retrospective review of thoracic tumors from our institutional archive, an additional five cases were identified. Three of these were found by immunohistochemical screening of thirteen TMAs with INI1 (SMARCB1) immunohistochemistry, comprising in total 1479 cases including pleural mesothelioma, non-small cell lung cancer (NSCLC), small cell lung cancer (SCLC), neuroendocrine tumors and large cell carcinoma (Supplementary Table ). The remaining two cases were identified within our in-house sarcoma database. For all identified cases, the original diagnoses are included in Table . Morphologic features were assessed on hematoxylin and eosin-stained slides. Clinical information was obtained from the hospital's electronic medical records. All analyses were performed in our clinical laboratories. The study was approved by the local ethics committee (BASEC-2021-00417) and was conducted in accordance with local laws and regulations, including patients who signed our institution's general informed consent.

Immunohistochemistry

Immunohistochemistry was performed on 2 µm thick deparaffinized, rehydrated sections obtained from archived, paraffin-embedded blocks from each patient using antibody-specific epitope retrieval techniques. An automated system was used for detection of the following primary antigens: INI1 (SMARCB1) (BD Biosciences, clone 25/BAF47, 1:300), BRG1 (SMARCA4) (Abcam, clone EPNCIR111A, 1:50), pan-cytokeratin (Dako, clone AE1/AE3, 1:50), claudin4 (Invitrogen, clone 3E2C1, 1:200), BRM (SMARCA2) (Cell Signaling Technology, clone D9E8B, 1:800), TTF1 (Ventana-Roche, clone SP141, prediluted), CD34 (Ventana-Roche, clone QBEnd/10), calretinin (Ventana-Roche, clone SP65, prediluted), and synaptophysin (Novocastra, clone 27G12, 1:50).
For INI1, BRG1 and BRM, reactivity was considered "deficient" if complete absence of nuclear staining was seen against intact internal positive controls (e.g. lymphocytes). Further immunohistochemical data were complemented or retrieved from the original pathology reports (Supplementary Table ).

Molecular analysis

Genomic profiling

8-μm thick unstained sections (5–10 slides each) of FFPE material were cut and macro-dissected according to corresponding hematoxylin and eosin slides to enrich specimens for tumor cells according to clinical protocols. Overnight proteinase K buffer digestion was followed by purification with the Maxwell® RSC DNA FFPE Kit (PN: AS1450). Double-stranded DNA was quantified by a PicoGreen fluorescence assay using the provided lambda DNA standards (Invitrogen). 200 ng of dsDNA was fragmented to 50–1000 bp by sonication using the Covaris system prior to purification with AMPure XP beads (Beckman Coulter). Targeted next-generation sequencing (NGS) was performed with the FDA-approved, broad comprehensive molecular diagnostic test FoundationOne®CDx (Foundation Medicine Inc., Cambridge, MA, USA). The assay sequences the complete exons of 324 cancer-related genes for the detection of substitutions, insertion and deletion alterations (delins), and copy number alterations (CNAs), as well as select gene rearrangements and genomic signatures including microsatellite instability (MSI), tumor mutational burden (TMB) and loss of heterozygosity (LOH) score. Selected introns and promoter regions of 236 genes are also sequenced for the detection of gene rearrangements and fusions. For the oncoprint generation, variants of unknown significance (VUS) were excluded and only significant variants and copy number alterations were included. Plots were generated using the maftools package in R. Mutational signatures were generated using the R package MutationalPatterns from the high-quality somatic gene variants.
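Mutational-signature tools such as MutationalPatterns first collapse each single-nucleotide variant onto a pyrimidine reference before counting contexts. A minimal sketch of that six-class collapse is shown below; the full analysis additionally uses the flanking bases to build 96 trinucleotide contexts (omitted here), and the example variants are invented:

```python
from collections import Counter

COMPLEMENT = {"A": "T", "C": "G", "G": "C", "T": "A"}

def substitution_class(ref, alt):
    """Collapse an SNV to one of the six pyrimidine-reference classes
    (C>A, C>G, C>T, T>A, T>C, T>G) used in mutational-signature analysis."""
    if ref in "AG":  # purine reference: take the reverse complement of both bases
        ref, alt = COMPLEMENT[ref], COMPLEMENT[alt]
    return f"{ref}>{alt}"

def signature_counts(snvs):
    """Tally substitution classes for a list of (ref, alt) SNVs."""
    return Counter(substitution_class(r, a) for r, a in snvs)
```

For example, a G>T variant reported on the forward strand is counted as C>A, since the two descriptions refer to the same mutation on opposite strands.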
Methylation profiling

DNA was extracted from fresh frozen tumor samples (2/9) using the Promega Maxwell® RSC Tissue DNA Kit (PN: AS1610). If no fresh frozen tissue was available, FFPE material (see genomic profiling) was used with the same DNA extraction kit. 500 ng of genomic DNA from each sample was subjected to bisulfite conversion using an accredited in-house assay. The Infinium Human Methylation EPIC array was used to obtain genome-wide DNA methylation profiles according to the manufacturer's instructions (Illumina, USA). The quality of each sample was checked using the on-chip quality metrics and the R package minfi version 1.40. IDAT files for all nine samples were uploaded to the DKFZ Sarcoma Classifier (version 12) ( www.molecularsarcomapathology.org ). Each classifier result consisted of a suggested methylation class with an accompanying calibrated score, a probability reflecting the confidence of the given methylation class assignment. As defined by Koelsche et al., the classifier was only deemed to have made a successful prediction if the sample obtained a calibrated score of 0.9 or higher. We further used the Epigenomic Digital Pathology (EpiDiP) platform ( www.epidip.org ) hosted by the Department of Pathology at the University Hospital Basel, Switzerland. All IDAT files were uploaded to EpiDiP, and conclusions were based on the UMAP results and copy number plots generated there.
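The calibrated-score rule described above (accept a methylation class only at a score of 0.9 or higher) can be expressed as a simple filter. This is a hedged sketch with invented sample names, class labels and scores; the real classifier's output format differs:

```python
# Hypothetical classifier outputs: (suggested methylation class, calibrated score).
# All names and numbers below are illustrative only.
results = {
    "sample_1": ("epithelioid sarcoma", 0.97),
    "sample_2": ("malignant rhabdoid tumour", 0.62),
    "sample_3": ("chordoma", 0.91),
}

THRESHOLD = 0.9  # calibrated-score cut-off as defined by Koelsche et al.

def confident_calls(classifier_results, threshold=THRESHOLD):
    """Keep only predictions whose calibrated score reaches the threshold."""
    return {
        sample: methyl_class
        for sample, (methyl_class, score) in classifier_results.items()
        if score >= threshold
    }
```

Samples falling below the threshold (here, sample_2) would be treated as unclassifiable rather than assigned a low-confidence class.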
Patient cohort and clinical characteristics

In total, we identified nine intrathoracic neoplasms with immunohistochemical loss of INI1 (SMARCB1) (Supplementary Table ), all of which were initially classified as intrathoracic neoplasms arising from either the lung ( N = 3), lung/pleura ( N = 1), pleura ( N = 2), pleura/thoracic wall ( N = 1) or mediastinum ( N = 2). Patient information, clinical characteristics and original diagnoses are listed in Table . In our cohort, immunohistochemically INI1 (SMARCB1)-deficient thoracic neoplasms occurred in one woman and eight men, ranging from 20 to 76 years of age (mean 57 years) at disease presentation. A smoking history was reported in six patients, ranging from 10 to 80 pack years (mean 34 pack years). For one patient (patient 5) the smoking status was not available, and two patients were documented as never-smokers. Based on computed tomography scans in eight patients, the tumors presented as lung ( N = 3), lung/pleura ( N = 1), pleura ( N = 2), pleura/thoracic wall ( N = 1) or mediastinal ( N = 2) masses (Fig. ). In one case (patient 2) the pleural mass further involved the thoracic wall and axilla; this patient presented with extensive, repetitive pleural effusions positive for malignancy. The tumor sizes at presentation ranged from 3.5 to 15.8 cm (mean 7 cm). Imaging and clinical follow-up information were obtained for all nine patients, with a median follow-up of 14 months (range 1–53 months). All nine patients died of progressive disease; one patient (patient 5) was lost to follow-up due to an early transfer to another hospital (Table ). Metastatic disease at presentation occurred in five patients, four of whom had lung or pleural involvement only. Two patients showed distant metastases in bone, soft tissue, liver and adrenal glands (patients 1 and 6). No brain metastases were documented in any of the nine patients.
For therapy, the chemotherapy regimens varied, reflecting the original heterogeneous diagnoses of carcinoma, mesothelioma and sarcoma. The original diagnoses were mostly made in concordance with the particular anatomic site; all were, however, interpreted as high-grade malignancies (Table ). Patient 1 initially presented at an outside hospital and was diagnosed with a thymic squamous cell carcinoma. The case was referred to us for molecular testing, which revealed a homozygous SMARCB1 (INI1) loss that was accompanied by loss of INI1 ( SMARCB1 ) on complementary immunohistochemistry. Following re-evaluation by a soft tissue pathologist, the diagnosis of epithelioid sarcoma, proximal type, was made. The patient died of progressive disease 16 months after initial presentation. Patient 4 died of localized disease within one month of initial presentation. Patient 9 was difficult to interpret, as clinically the tumor was located in the posterior mediastinum, growing almost circumferentially around the esophagus. This patient died 4 months after initial presentation. Additionally, there were two cases (patients 4 and 7) with predominantly pleural neoplasms, neither of which had a known history of asbestos exposure.

Morphological features

Histologically, tumors harboring alterations in the SWI/SNF complexes usually show morphologic overlap, such as an epithelioid to rhabdoid morphology. Eight of the nine tumors showed diffuse sheets of discohesive cells; two of these showed pure rhabdoid morphology, with cells harboring distinctive hyaline cytoplasmic inclusions and undifferentiated round to plasmacytoid cells with compressed, crescent-shaped peripheral nuclei (patients 3 and 4). Four cases showed pure epithelioid morphology, containing cells with abundant eosinophilic cytoplasm and enlarged vesicular nuclei with prominent nucleoli (patients 1, 2, 5, and 7). 
Overall, the tumor cells were relatively monotonous, with focally moderate pleomorphism, including scattered tumor giant cells, seen in two cases. Two cases contained areas of mixed rhabdoid and epithelioid patterns (patients 6 and 9). Only one case (patient 8) showed a carcinoma-like solid growth pattern with cellular cohesion. Based on this, we stratified the tumors into morphologic subgroups: epithelioid, rhabdoid, mixed and solid. Patient 8 (solid morphology) presented with an intrapulmonary mass containing large tumor cells with abundant cytoplasm, nuclear pleomorphism with variably coarse chromatin, and positivity for synaptophysin, which is why it was initially classified as large cell neuroendocrine carcinoma (LCNEC) of the lung. Brisk mitotic activity and extensive necrosis were seen in all nine cases. None of the tumors demonstrated clear evidence of differentiation in the form of gland formation, keratinization or papillary structures. Representative hematoxylin and eosin images together with immunohistochemical stains are shown for the morphologic subgroups in Fig. .

Immunohistochemical Features

The immunohistochemical findings are summarized in Table and grouped according to morphologic subgroup. As per the inclusion criteria, all tumors showed complete loss of protein expression for INI1 ( SMARCB1 ), with positive internal controls, e.g. lymphocytes (Fig. ). The SWI/SNF complex subunit BRG1 ( SMARCA4 ) was retained in all cases, whereas concomitant BRM ( SMARCA2 ) loss was seen in four cases (patients 1, 4, 5 and 9) (Table ). Pan-cytokeratin (AE1/AE3) expression was seen in half of the epithelioid (2/4) and all of the mixed (2/2) and solid (1/1) cases. In the majority of cases, strong and diffuse membranous pan-cytokeratin staining was seen, while some cases demonstrated strong and diffuse cytoplasmic staining. Claudin4 was positive in the mixed (2/2) and solid (1/1) cases, concomitant with the pan-cytokeratin expression. 
The only case expressing TTF1 was an intrapulmonary tumor that was negative for pan-cytokeratin (patient 5). Synaptophysin, a marker of neuroendocrine differentiation, was positive in two cases (patients 5 and 8): a pan-cytokeratin-negative tumor with epithelioid morphology and a pan-cytokeratin-negative tumor with solid morphology. Another two cases, both with a predominantly pleural tumor mass (patients 4 and 7), were initially classified as mesothelioma, although all mesothelial markers, such as calretinin, WT1, D2-40 and CK5/6, tested negative and BAP-1 and MTAP were retained (Table and Supplementary Table ). Based on the methylation array data, which revealed closely related tumor methylation classes, further immunohistochemical markers were performed for individual cases and are listed for review in Supplementary Table .

Molecular analysis

Genomic panel testing in all nine cases was performed using the FoundationOne®CDx assay (Fig. ). Homozygous SMARCB1 loss was observed in six cases and SMARCB1 mutations in two tumors (patients 3 and 5). In one case (patient 2), we detected a shift below the median copy number range within the target region chr22:24176586–24176715 (hg19) in SMARCB1 (NM_003073). However, as this target was relatively close to CN = 1, it did not computationally meet the full criteria of a homozygous loss. The target comprises a region downstream of exon 9 of the SMARCB1 gene and appears to lie entirely within the untranslated gene region. It is conceivable that this single target represents a single-copy loss of one allele of SMARCB1 , thus leading to loss of immunohistochemical protein expression. All tumors were microsatellite stable and showed a tumor mutational burden of <7 mut/Mb. In eight tumors, we detected a loss of heterozygosity (LOH) score below 1.5%, while one tumor (patient 6) presented an LOH score of 26.5%. 
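The copy-number reasoning above can be illustrated with a simple decision rule. The thresholds below are assumptions chosen for demonstration only; they are not the FoundationOne CDx calling criteria:

```python
# Toy copy-number interpretation (assumed thresholds, for illustration only):
# an estimated copy number near 0 suggests homozygous loss, near 1 a
# single-copy (heterozygous) loss, and near 2 a copy-neutral state.
def call_copy_number_state(estimated_cn):
    if estimated_cn < 0.5:
        return "homozygous loss"
    if estimated_cn < 1.5:
        return "single-copy loss"
    return "copy-neutral or gain"

# The SMARCB1 target in patient 2 sat "relatively close to CN = 1", so under
# this toy rule it would read as a single-copy loss, not a homozygous loss:
print(call_copy_number_state(1.1))   # single-copy loss
print(call_copy_number_state(0.1))   # homozygous loss
print(call_copy_number_state(2.0))   # copy-neutral or gain
```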
All detailed results, including variants of unknown significance (VUS), are shown in Supplementary Table . Genomic signature analysis revealed an absence of a smoking signature in six patients (smokers and non-smokers) and, interestingly, only a low contribution in three patients with a known smoking history of more than 30 pack years (patients 4, 6, and 8; Supplementary Fig. ). In none of the patients did we find mutations in genes such as KRAS, KEAP1 and STK11 that are typical of smoking-related NSCLC. Likewise, no other common lung adenocarcinoma driver alterations, such as in EGFR or ALK , were detected. All nine samples were subsequently submitted for DNA methylation profiling using the Infinium Human Methylation EPIC BeadChip array for analysis with the sarcoma classifier and copy number analysis (Table and Fig. ). We excluded one sample (patient 3) from further analysis due to poor DNA quality, and the CNV plots of three additional patients (patients 1, 6 and 8) were also deemed not evaluable due to low quality. We attempted tumor classification using the DKFZ Sarcoma Classifier platform version 12 ( www.molecularsarcomapathology.org ) and the EpiDiP server ( www.epidip.org ) hosted by the University Hospital Basel, Switzerland. Using the DKFZ sarcoma classifier, no successful prediction (calibrated score >0.9) could be established for any of the nine cases. This calibrated score indicates the confidence of the given methylation class assignment. Our cases had a median score of 0.6 (range 0.4–0.89), excluding two samples that failed to provide a score. However, seven of the nine tumors were assigned to closely related tumor methylation classes, such as epithelioid sarcoma. The calibrated scores and related methylation classes are provided in Table . Copy number profiles are provided in Supplementary Fig. . None of the cases showed methylated CpGs in the putative promoter region of the SMARCB1 gene, in line with earlier reports . 
In patient 4, the only driving alteration detected was a homozygous loss of SMARCB1 (INI1), which is reflected in the related methylation class of a malignant rhabdoid tumor. This patient's tumor showed a slight smoking signature, but no other smoking-related alterations were detected. In five other patients (patients 1, 2, 3, 5, and 7), SMARCB1 (INI1) alterations (loss and mutations) were accompanied by the loss or inactivation of genes with a tumor-suppressive role, such as CDKN2A , ATM, NF2 and PARK2 , representing complex genotypes such as those seen in epithelioid sarcoma . These tumors were wild type for the TP53 gene, as more commonly described in epithelioid sarcoma. Methylation analysis of these five tumors showed an enriched relation to known SMARCB1 (INI1)-driven entities. In patients 6, 8 and 9, concomitant SMARCB1 (INI1) loss and a TP53 missense mutation were found together with the inactivation of other tumor suppressors such as CDKN2A and RB1 . Interestingly, patient 9 related closest to the group of epithelioid sarcoma despite harboring a TP53 mutation. A principal component analysis (PCA) showed clustering of some patients (patients 4 + 9, 5 + 7 and 6 + 8). Interestingly, patients 6 and 8 seem to cluster separately from the rest of the cohort (Fig. ). These two patients harbor the highest mutational load, with alterations in TP53 and RB1 , and show a smoking-related signature.
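The kind of principal component analysis used here can be sketched with plain NumPy via an SVD of the centered data matrix. The "beta values" below are synthetic toy data for two invented sample groups; the authors' analysis used genome-wide EPIC array data:

```python
import numpy as np

# Minimal PCA sketch via SVD on centered data. Two synthetic groups of
# three samples with shifted methylation levels stand in for the cohort;
# samples from the same group should sit close together on PC1.
rng = np.random.default_rng(0)
group_a = rng.normal(0.2, 0.05, size=(3, 4))   # low-methylation group
group_b = rng.normal(0.8, 0.05, size=(3, 4))   # high-methylation group
beta = np.vstack([group_a, group_b])           # 6 samples x 4 CpGs

centered = beta - beta.mean(axis=0)            # center each CpG (column)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
scores = u * s                                 # sample coordinates on the PCs

pc1 = scores[:, 0]                             # first principal component
print(pc1.round(2))                            # the two groups separate on PC1
```

In the study, the analogous plot is what shows patients 6 and 8 clustering apart from the rest of the cohort.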
The discovery that genes encoding subunits of the SWI/SNF complexes show genomic alterations across a wide variety of cancer types is about a decade old; consequently, our understanding of the mechanisms and the potential therapeutic implications remains in its infancy . SWI/SNF complex-deficient carcinomas and mesenchymal tumors commonly share a discohesive epithelioid or rhabdoid morphology, and this should guide the use of markers such as INI1 ( SMARCB1 ) and BRG1 ( SMARCA4 ) in the diagnostic work-up. For thoracic epithelioid neoplasms lacking TTF1 positivity, BRG1 ( SMARCA4 ) staining should be considered in order to identify a TTF1-negative, SMARCA4 -deficient non-small cell lung cancer . These usually lack undifferentiated/sarcomatoid features. In tumors with undifferentiated components, such as round cell or rhabdoid morphology, and loss of BRG1 ( SMARCA4 ), the diagnosis of a SMARCA4 -deficient undifferentiated tumor should be made. SMARCA4 -deficient undifferentiated tumors in the thorax are usually smoking-related . When BRG1 ( SMARCA4 ) is retained in such cases, investigation for an INI1 ( SMARCB1 )-deficient neoplasm is highly recommended. This should be considered independent of the age of the patient and should not be misguided by the location or the cytokeratin status of the tumor. Here, we presented nine patients with intrathoracic neoplasms showing immunohistochemical loss of INI1 ( SMARCB1 ), and investigated whether these intrathoracic SMARCB1 -deficient neoplasms represent a distinct entity of their own. All of our cases showed retained expression of BRG1 and no genomic alterations in SMARCA4 or methylation events, clearly demarcating them from the spectrum of SMARCA4 -deficient non-small cell lung cancers and SMARCA4 -deficient undifferentiated tumors . Eight of the nine cases showed morphology overlapping with both categories, SWI/SNF complex-deficient carcinomas and sarcomas. The differentiation between these two categories is particularly challenging. 
We therefore evaluated Claudin-4, a useful marker in the distinction between carcinoma and mesothelioma , but also between carcinoma and sarcoma . In a study by Schaefer et al., Claudin-4 expression was detected in 80% of SWI/SNF complex-deficient undifferentiated carcinomas compared with only 4% of sarcomas with epithelioid morphology. However, carcinomas with complete loss of Claudin-4 expression have also been described. In our cohort, Claudin-4 and pan-cytokeratin AE1/AE3 co-expression was identified in three tumors with mixed epithelioid/rhabdoid or solid growth patterns. In two of them (patients 6 and 8), the strong Claudin-4 staining was in favor of a carcinoma diagnosis. In our opinion, Claudin-4 is helpful in the differentiation of SMARCB1 (INI1)-deficient carcinoma and sarcoma. Furthermore, we explored genomic and methylation profiling and believe that a proper molecular work-up can contribute to a more accurate classification. Nevertheless, using methylation profiling was not straightforward. First, some of our samples had low tumor purity, with a probable negative impact on the calibrated scores. However, recent data suggest that this should not affect the accuracy of the prediction . Second, although the EpiDiP classifier includes a broad tumor entity spectrum accounting for many carcinoma subtypes, no definite match was found here (Table ). The DKFZ classifier does not yet include all tumor entities and subtypes, which is why predictions above the threshold are not to be expected for all tumors with this classifier. Although all nine cases failed to be successfully classified (calibrated scores <0.9), it is noteworthy that seven cases matched closely to related tumor entities. In the majority of cases, we see a relation to known SMARCB1 -driven sarcomas such as MRT (patient 4) and epithelioid sarcoma (patients 1, 2, 3, 5, and 7). MRT of the mediastinum is a rare, aggressive tumor, with fewer than 30 cases reported in adults . 
The same applies to proximal-type epithelioid sarcoma in the mediastinum, with even fewer reported cases – . For a better differentiation of these two entities, molecular work-up might be helpful. Unlike MRT, epithelioid sarcomas harbor, in addition to SMARCB1 (INI1) alterations, multiple copy number gains and losses throughout the genome, as seen for example in cases 1 and 7, in contrast to case 4 , . Other interesting work has shown that epithelioid sarcoma and MRT differ in miRNA expression; however, this might be more challenging to include in a routine clinical work-up . In general, pathologists have to be aware of these entities, as two of the cases (patients 4 and 7) were misclassified as mesothelioma due to tumor location, despite negative common mesothelial markers. A few mesothelioma cases have been reported harboring loss of INI1 (SMARCB1) protein expression, but these retained positivity for common mesothelial markers , . Principal component analysis of the methylation data showed clustering of individual patients. Two patients with multiple genomic rearrangements/complex molecular profiles and a smoking signature (patients 6 and 8) clustered separately. Patient 6 had an intrapulmonary neoplasm with a pattern of metastasis typical of lung carcinoma (lymph nodes, adrenal gland, and liver). Additionally, the genomic profile included mutations in TP53 and RB1 , as commonly seen in SCLC and LCNEC. Immunohistochemical co-expression of pan-cytokeratin and Claudin4 further supports an epithelial lineage. Therefore, this case probably fits best into the category of large cell lung carcinoma with an additional SMARCB1 alteration as a later event in the evolution of this tumor. Large cell carcinomas are an understudied entity but seem to be closely related to LCNEC and SCLC on a genomic level – . In the same category falls case 8, an intrapulmonary lesion with co-expression of pan-cytokeratin and Claudin4 and additional alterations in TP53 and RB1 . 
Classical neuroendocrine morphology and positive synaptophysin immunohistochemistry further support the diagnosis of LCNEC. A borderline case is patient 9, with a mass in the posterior mediastinum around the esophagus. This case showed mixed-type histomorphology, with tumor cells positive for pan-cytokeratin and patchy for Claudin4. In addition, the molecular profile was more complex, with additional alterations in TP53, MDM2, and PARK2 , challenging the diagnosis of epithelioid sarcoma versus carcinoma. However, the closest methylation class in this case was epithelioid sarcoma. Based on the analyses performed, we show that SMARCB1 (INI1)-deficient neoplasms are very rare and most likely represent a spectrum of known tumor types, namely epithelioid sarcoma, MRT and undifferentiated carcinoma, rather than a distinct entity. We demonstrate that molecular analyses can help to better categorize these tumors. A possible limitation of this study is that cases with retained INI1 ( SMARCB1 ) protein expression but potential SMARCB1 alterations (e.g., mutations) would not have been detected, as retained nuclear staining was an exclusion criterion in this study. Loss of nuclear labeling on INI1 ( SMARCB1 ) immunohistochemistry can also be mediated by structural variants involving intronic regions not covered by the assay, or through epigenetic or post-translational regulation . However, in the present study and in earlier work, no methylation events were detected in the CpGs of the promoter region . The fact that genes encoding SWI/SNF components are mutated in cancer and confer a dismal prognosis raises several key questions, including whether such mutations, despite promoting cancer growth, result in synthetic lethal dependencies. 
From a therapeutic standpoint, it is of major importance to define whether any such dependencies are specific to the particular subunit that is mutated and/or the tissue of origin, or whether the mutations confer shared synthetic lethal dependencies regardless of which subunit is mutated. Emerging data indicate that mutations in SWI/SNF genes do indeed result in vulnerabilities in cancers, some of which are subunit- and/or cell-type-specific, while others are potentially more broadly applicable. The pursuit of therapeutic translation is underway for several of these vulnerabilities, with a number of treatment approaches being tested in clinical trials. In tumors with loss of INI1 (SMARCB1), clinical and preclinical evidence suggests possible sensitivity to targeted therapies, including EZH2 inhibitors and anti-PD-1 immune checkpoint inhibition – . Additional new options are on the way, as recent in vitro data suggest synergistic action of WDR5 and HDM2 inhibitors in SMARCB1 -deficient cancer cells . We currently face an absence of approved therapies for SMARCB1 (INI1)-deficient tumors, but as clinical trials that include INI1 ( SMARCB1 )-deficient tumors are evolving, it is important to identify these rare individual patients. We conclude that proper diagnostic classification of intrathoracic tumors with INI1 ( SMARCB1 ) deficiency remains challenging. The diagnostic work-up should be guided by histomorphology and immunohistochemistry, without being misguided by tumor location, age or clinical presentation. The differentiation between epithelioid sarcoma, MRT and SMARCB1 -deficient carcinoma might only be possible with the help of molecular profiling. However, an accurate diagnosis is important for optimal patient care, and the correct diagnosis will influence treatment decisions as clinical trials and more targeted therapeutic options emerge.
Supplementary Material
Tailoring and Evaluating Treatment with the Patient-Specific Needs Evaluation: A Patient-Centered Approach

Study Design

This was a user-centered mixed-methods study of patients with hand or wrist conditions, health care providers, and other stakeholders. We used the Consensus-Based Standards for the Selection of Health Measurement Instruments guidelines on patient-reported outcome measure development and measurement properties.

Setting

We developed the PSN at Erasmus Medical Center (an academic hospital) and Xpert Clinics (a specialized clinic for hand and wrist care) in the Netherlands. Data were collected at Xpert Clinics between July and August of 2023. The medical ethics review committee of Erasmus Medical Center approved this study, and all participants provided informed consent.

Research Team

The core research team consisted of hand surgeons and therapists (W.A.d.R., Y.E.v.K., R.M.W., S.E.R.H., G.R.A., A.d.R., G.M.V., and J.C.M.), professionals with experience in developing measurement sets and tools (R.M.W., S.E.R.H., H.P.S., J.C.M., and R.W.S.), , – and electronic data capturing and implementation experts (H.P.S., Y.E.v.K., R.M.W., R.W.S., S.E.R.H., J.C.M., G.M.V., and W.A.d.R.). , We consulted other clinicians, language experts, and native English speakers.

PSN Development Process

Development of the PSN was iterative and comprised 5 overlapping stages, with each stage informing subsequent stages (Fig. ). Stage 1 included literature studies and expert meetings. After developing an item bank, we conducted a pilot study and survey on completeness and understandability in stage 2. Stage 3 included cognitive debriefing of patients and clinicians and refinement of the item bank. We gathered expert input in stage 4, and consulted a language expert, performed crosscultural translation, and repeated the survey for the final PSN in stage 5 (for more details, see Fig. ). 
Participants

We used different samples to develop the PSN and to establish its discriminative validity and test-retest reliability (Fig. ). For all samples, patients were eligible if they were adults, had any hand or wrist condition, completed our intake questionnaire, and understood the Dutch language. All questionnaires were completed digitally. For the survey, we excluded patients who gave inconsistent answers (eg, stating "fair" on understandability but stating that all is clear in the associated comments box). For discriminative validity, we included patients who completed the PSN at baseline and at 3-month follow-up, as well as the Satisfaction with Treatment Results Questionnaire at 3 months. , We prospectively invited patients to participate in a test-retest study and to complete the PSN a second time 3 to 5 days after initial completion. The retest remained accessible for 6 days (ie, a possible time interval of 3 to 11 days). We hypothesized that patient needs and goals would remain stable during this period. We included patients in the test-retest analysis if they completed both the primary and retest PSN before clinician consultation. The Consensus-Based Standards for the Selection of Health Measurement Instruments advise a sample size of more than 100 participants when examining test-retest reliability. To describe the results of the final PSN, we included all patients who completed the PSN at baseline and at 3-month follow-up. There were no additional exclusion criteria. All samples reflected the target population (patients with hand and wrist conditions) and differed in age, sex, and treatment location.

Discriminative Validity, Test-Retest Reliability, and Statistical Analysis

We evaluated discriminative validity by comparing the satisfaction with treatment results of patients who did and did not obtain their PMG. 
At 3 months, we used the Satisfaction with Treatment Results Questionnaire, which evaluates satisfaction on a 7-point Likert scale ranging from extremely dissatisfied to extremely satisfied. We determined the PMG's discriminative power using chi-squared tests and computed the Cramer V to interpret the effect size, where 0.10 reflects a small effect size, 0.30 a medium effect size, and 0.50 a large effect size. We evaluated test-retest reliability by computing absolute agreement and the Cohen kappa, and we computed intraclass correlation coefficients for all variables, including goal domain, baseline score, score needed to be satisfied with the most important goal domain, and PMG. Kappa values lie between −1 and 1, with 0 or less indicating no agreement; 0.01 to 0.20, no to slight agreement; 0.21 to 0.40, fair agreement; 0.41 to 0.60, moderate agreement; 0.61 to 0.80, substantial agreement; and 0.81 to 1.00, almost perfect agreement. We calculated intraclass correlation coefficients using a 2-way mixed-effects model. Intraclass correlation coefficients range from 0 to 1, with 1 being perfect reliability; 0.90 to 0.99, very high reliability; 0.70 to 0.89, high reliability; 0.50 to 0.69, moderate reliability; 0.26 to 0.49, low reliability; and 0.00 to 0.25, little, if any, reliability. There were no missing data in the final PSN, as completing it before clinician consultation is mandatory in our clinical setting. We analyzed missing-data patterns for the test-retest analyses: patients who completed both the primary test and the retest were responders, and patients without a retest were nonresponders. We compared baseline characteristics of responders and nonresponders using significance testing and standardized mean differences to investigate whether they differed systematically. R statistical software version 4.1.1 was used for the quantitative analyses, and P < 0.05 was considered significant. We tested the Dutch version of the PSN.
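The effect-size and agreement statistics described above are straightforward to reproduce. The following sketch (in Python rather than the R used in the study, and on made-up numbers) computes the Pearson chi-squared statistic, the Cramer V, and the Cohen kappa from first principles.

```python
import math

def chi2_statistic(table):
    """Pearson chi-squared statistic for an r x c contingency table."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_tot[i] * col_tot[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2, n

def cramers_v(table):
    """Cramer V = sqrt(chi2 / (n * (min(r, c) - 1)));
    0.10 ~ small, 0.30 ~ medium, 0.50 ~ large effect size."""
    chi2, n = chi2_statistic(table)
    k = min(len(table), len(table[0]))
    return math.sqrt(chi2 / (n * (k - 1)))

def cohens_kappa(a, b):
    """Cohen kappa for two paired categorical ratings (test vs retest)."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    chance = sum((a.count(c) / n) * (b.count(c) / n) for c in set(a) | set(b))
    return (observed - chance) / (1 - chance)

# Hypothetical 2x2 table: PMG obtained (rows) by satisfied (columns).
print(round(cramers_v([[80, 20], [30, 70]]), 2))  # a large effect on these toy data

# Hypothetical goal-domain choices at the primary test and the retest.
primary = ["pain", "pain", "function", "appearance", "pain", "function"]
retest = ["pain", "function", "function", "appearance", "pain", "pain"]
print(round(cohens_kappa(primary, retest), 2))  # moderate agreement on these toy data
```

On these invented numbers the Cramer V lands near 0.50 and kappa near 0.45, which map onto the interpretation bands listed above; the study's reported values come from the real patient data.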
Development Process: Cognitive Debriefing and Survey Data

We performed 16 cognitive interviews among 9 patients and 7 clinicians. The patients (3 men and 6 women; age range, 21 to 71 years; median age, 51 years) had a variety of diagnoses, and we also included patients with lower levels of education. Among clinicians, we interviewed 1 occupational hand therapist, 2 physical hand therapists, and 4 hand surgeons (5 men and 2 women; age range, 27 to 70 years; median age, 40 years). We iteratively improved the PSN, alternating between interviewing and adjusting (eg, we shortened the introduction and explanation texts; changed the answer scale for pain, tingling, and sensitivity; and simplified the text with a language expert). (See Table, Supplemental Digital Content 1, which shows the conceptual framework of the PSN derived from cognitive interviews with patients [n = 9], http://links.lww.com/PRS/H13. See Table, Supplemental Digital Content 2, which shows the conceptual framework of the PSN derived from cognitive interviews with clinicians [n = 7], http://links.lww.com/PRS/H14.) The survey on the final PSN indicated that the questions and answer options were rated entirely or mostly understandable by 90% to 92% and fully or mostly complete by 84% to 89% of the 275 participants. (See Figure, Supplemental Digital Content 3, which shows pie charts indicating the understandability and completeness of the questions and response options on information needs [A, B, and C], treatment goals, and PMG [D, E, and F]. The survey indicated that 90% considered the questions on information need entirely or mostly understandable, 91% considered the answer options entirely or mostly understandable, and 84% rated the answer options as entirely or mostly complete. For the treatment goals and PMG, this was 92%, 91%, and 89%, respectively, http://links.lww.com/PRS/H15.)
For the pilot PSN (n = 223), the questions and answer options were rated entirely or mostly understandable by 89% to 93% and fully or mostly complete by 86% to 91%.

The Final PSN

Because of the dependencies within the PSN, it works best in digital form. It can be accessed at https://personeel.equipezorgbedrijven.nl/ls/index.php?r=survey/index&sid=587344&lang=en (see Table for a nondigital version). The intake PSN has 5 questions and takes approximately 3 minutes to complete. The information needs section asks an open question about the patient's reason for making an appointment at the clinic (the patient's request for help), followed by a single-select question in which respondents pick their most important information need category. Respondents then select a predefined subanswer based on that category to specify their information need in more detail. The treatment goal section of the PSN asks respondents to choose which domain they would most like to improve if they were to be treated and to rate their baseline score on that domain on a scale of 0 to 10 (eg, the baseline pain score). Respondents have the option of selecting 2 secondary goal domains. The final question asks for the score they think they need to achieve with treatment to be satisfied. The PMG is then generated automatically as the difference between the respondent's baseline performance rating and the score needed for the patient to be satisfied (Fig. ). The follow-up PSN evaluates the previously selected information needs and treatment and improvement goals in only 2 questions and takes less than 1 minute to complete. The final PSN was completed by 2860 patients (Table ). Figure shows the selected information need categories, and Figure shows the distribution of the selected treatment goals. The rating on the most important domain was normally distributed, with a median score of 4 (Fig. ). The median score needed for the patient to be satisfied with the treatment result was 9 (Fig. ).
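The PMG arithmetic described above can be sketched in a few lines. This is an illustration in Python, not the questionnaire's actual implementation: the class, its field names, and the "obtained" check are our assumptions, and it presumes (as in the study's median figures) that higher domain scores are better.

```python
from dataclasses import dataclass

@dataclass
class PsnIntake:
    """One intake response; names are illustrative, not the PSN's schema."""
    goal_domain: str       # most important goal domain, eg "pain"
    baseline: int          # 0-10 rating of current status on that domain
    score_to_satisfy: int  # 0-10 score needed to be satisfied

    @property
    def pmg(self) -> int:
        # PMG = score needed to be satisfied minus the baseline rating.
        return self.score_to_satisfy - self.baseline

def pmg_obtained(intake: PsnIntake, followup_score: int) -> bool:
    """Assumed operationalization: the PMG is obtained when the follow-up
    rating reaches the score the patient said they needed."""
    return followup_score >= intake.score_to_satisfy

# The study's median intake: baseline 4, score needed 9 -> a PMG of 5 points.
intake = PsnIntake(goal_domain="pain", baseline=4, score_to_satisfy=9)
print(intake.pmg)               # 5
print(pmg_obtained(intake, 9))  # True
```

Because both inputs are collected before the first consultation, the PMG is available to the clinician at the very start of care.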
Discriminative Validity and Test-Retest Reliability

We included 1985 patients in the discriminative validity analysis (Table ). Patients who obtained their PMG reported greater satisfaction with treatment results than those who did not (Fig. ) (P < 0.001), with a medium to large effect size (Cramer V = 0.48), indicating that the PMG has excellent discriminative validity (ie, the ability to distinguish satisfied from dissatisfied patients). For test-retest reliability, 102 of the 139 invited patients completed both the primary test and the retest, with a median interval of 7 days (range, 3 to 11 days). We found small differences between responders and nonresponders in age and type of work. (See Table, Supplemental Digital Content 4, which shows the nonresponder analysis for the test-retest study, http://links.lww.com/PRS/H16.) There was moderate agreement and reliability for the most important goal domain (Table ). (See Table, Supplemental Digital Content 5, which shows how often the most important goal domain was chosen at the primary test as well as at the retest. The values correspond to the number of patients and the percentage of the row total, except for the "row total" column, where the percentages correspond to the percentage of the column total, http://links.lww.com/PRS/H17.) When the most important goal domain was also chosen as a secondary goal domain in the retest, test-retest reliability improved to substantial agreement and high reliability (Table ). (See Table, Supplemental Digital Content 6, which demonstrates how often the most important goal domain was chosen at the primary test and also as the most important or a secondary goal domain at the retest. The values correspond to the number of patients and the percentage of the row total, except for the "row total" column, where the percentages correspond to the percentage of the column total, http://links.lww.com/PRS/H18.)
We found moderate reliability for the baseline score on the most important goal domain, for the score the patient needed to be satisfied, and for the PMG (Table ).
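The intraclass correlation coefficients behind these reliability figures come from a 2-way mixed-effects model. As a minimal sketch, the following Python implements one common single-measure variant, ICC(3,1) for consistency, on invented paired scores; the study's analyses were done in R, and its exact ICC form (consistency vs absolute agreement) is not specified here, so treat the choice of variant as an assumption.

```python
def icc3_1(first, second):
    """ICC(3,1): two-way mixed-effects model, single measurement,
    consistency definition, for k = 2 paired ratings per subject."""
    n, k = len(first), 2
    grand = (sum(first) + sum(second)) / (n * k)
    subj_means = [(a + b) / k for a, b in zip(first, second)]
    occ_means = [sum(first) / n, sum(second) / n]
    ss_rows = k * sum((m - grand) ** 2 for m in subj_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in occ_means)    # between occasions
    ss_total = sum((v - grand) ** 2 for v in first + second)
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Hypothetical baseline scores at the primary test and at the retest.
primary = [4, 6, 2, 8, 5]
retest = [5, 6, 3, 7, 5]
print(round(icc3_1(primary, retest), 2))  # ~0.90 on these toy data
```

The numerator rewards between-subject variance and penalizes within-subject (test-retest) disagreement, which is why stable repeated answers drive the coefficient toward 1.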
The PSN focuses on patient-specific information needs and treatment goals and supports patient-centered care. Although developed in patients with hand and wrist conditions, the PSN can easily be modified for other populations by altering the answer options. As part of the PSN, we introduce the PMG as a valid parameter of the improvement an individual wants to obtain in a domain relevant to that individual, given the pretreatment score.

How to Use the PSN

The PSN can be used as a conversation starter, decision support tool, and expectation management tool during the first consultation. The information needs section helps clinicians provide information effectively and tailor it to the individual patient. For example, knowing a patient's tendency toward surgery may guide how a clinician proposes noninvasive treatment when that is more appropriate. The treatment goal aids realistic goal setting, for example, when a patient with Dupuytren disease wants to improve the appearance of the hand but this is unlikely to be achieved with treatment. The PMG helps to identify and discuss expectations (eg, when a patient wants to improve from 2 to 10 to be satisfied, although this may be unrealistic because of comorbidity or symptom duration). The PSN also evaluates treatment success at a personal level. There was moderate agreement and reliability for the most important goal domain; however, these improved to substantial agreement and high reliability when agreement was also counted if the most important goal domain reappeared as a secondary goal domain in the retest. This indicates that the PSN's reliability is good enough to identify all patient-relevant goals, but also that patients find it hard to distinguish between their most important and secondary goals, which may overlap.
Our finding that most patients who obtained their PMG were satisfied with their treatment results suggests that satisfaction was independent of whether the PMG concerned the patient's factual primary goal, confirming the PSN's usability. Clinicians should always consider all goals, not just the most important goal domain.

Key Considerations

User participation during development, an iterative approach, pilot testing, and a mixed-methods study resulted in a content-valid, discriminative, and reliable patient-centered tool. The PSN was easily implemented, and patients deemed it relevant, complete, and understandable. The PSN helps patients prepare for their first consultation, enhances awareness, empowers them to take control of their treatment, and aids shared decision-making. Clinicians indicated that the PSN helps them identify patients with high or low expectations and respond accordingly. These aspects may improve patients' experience, expectation management, satisfaction with treatment results, and clinical outcomes. Compared with existing tools, the PSN adds value. For example, the Canadian Occupational Performance Measure, Goal Attainment Scaling, and Patient-Specific Goalsetting Method are completed together with a health care provider; they are relatively time-consuming in clinical practice, and there is a risk of "therapist bias," as a practitioner may influence the goals. Other tools do not assess patient-specific improvement goals and their relationship with satisfaction with treatment results, whereas the PSN does (ie, the PMG). Furthermore, in contrast with current tools such as the Patient-Specific Functional Scale, Canadian Occupational Performance Measure, and Patient-Specific Goalsetting Method, the PSN covers distinct International Classification of Functioning, Disability, and Health domains instead of focusing only on the activities and participation levels.
None of the aforementioned tools assesses information needs, but the PSN does. Altogether, the PSN is a unique tool with added value in daily clinical practice and research. The distribution of information need categories and goal domains indicates that patients have different needs and goals, highlighting that a personalized treatment strategy, which can be informed by the PSN, is essential. Furthermore, although most people wanted to reach a 9 to be satisfied, many patients consider lower scores satisfactory (ie, not all patients aim for the maximum score). The wide distribution indicates that this is indeed a personalized score, which further adds to the value of the PSN. The PMG distinguished satisfied from dissatisfied patients very well, indicating that it can be used to evaluate the clinical relevance of treatment effects. The PMG is especially beneficial because it is determined before clinician consultation, providing a proxy for satisfaction with treatment results at a very early stage, presuming that what patients think they want is what will satisfy them. Future research may investigate whether the PMG has greater discriminative capacity for satisfaction than traditional values such as the minimal important change or the patient acceptable symptom state. At our sites, a clinician dashboard displays patient characteristics, patient-reported outcome measures, clinician-reported outcomes (eg, goniometry), and prediction models. With the PSN added, health care can be further personalized and data-driven. Nevertheless, the PSN is also valuable as a standalone tool. We distribute the PSN before surgeon consultation. If treatment is scheduled (eg, surgery or therapy), we allow patients to change previous answers; for example, the patient's goal may have changed following expectation management during consultation. This strategy is, of course, optional.

Limitations

Respondents indicate their most important needs and goals without knowing their diagnosis.
It may also be difficult for individuals to accurately predict how they will feel about a future score, such as a 9 or 10, because this is an abstract idea that may not match their actual experience when they reach that level. However, focusing on the patient's most important needs and goals at this early stage benefits clinicians, as they may use these factors in decision-making and expectation management. Although some items may be moving targets (ie, a response shift, as goals may change over time), the PSN discriminated effectively between satisfied and dissatisfied patients. Future research could investigate how needs and goals change over time. The PSN does not replace traditional outcome measures, and the additional time investment should be considered when using it. Another limitation is the test-retest nonresponse. The small differences between responders and nonresponders seem clinically irrelevant, as age and type of work are unlikely to influence test-retest reliability. Nevertheless, although inevitable in test-retest studies, nonresponse may have influenced our findings. We addressed most issues mentioned by respondents, but we retained the cap on the number of information need categories a respondent could choose. Patients naturally have more questions, and clinicians should try to answer them all; however, we considered it essential that, at the least, the most important question is identified and answered, as there is a maximum information load that people can absorb. Therefore, it is essential to see the PSN as a conversation starter. In addition, patients may be better prepared by knowing their most important question. Another limitation is that we excluded patients with inconsistent answers from the survey. This may have influenced our findings on the understandability of the PSN. However, including these patients would have biased our findings; thus, we believe that excluding them was the best way to minimize bias.
In addition, although the participants had different educational levels (including lower levels), it remains challenging to reach lower-literacy patients, and future research may specifically target these patients. Finally, although we performed a cross-cultural translation to English, we tested only the Dutch version. Future studies may investigate the PSN in different languages and cultural settings.
The PSN can be used as a conversation starter, decision support tool, and expectation management tool during the first consultation. The information needs section helps clinicians to effectively provide information and tailor information provision to the individual patient. For example, knowing a patient’s tendency toward surgery may guide how a clinician proposes noninvasive treatment when more appropriate. The treatment goal aids realistic goal setting, such as if a patient with Dupuytren disease wants to improve his or her hand appearance, but it is unlikely that this will be achieved with treatment. The PMG helps to identify and discuss expectations (eg, if one wants to improve from 2 to 10 to be satisfied, although this may be unrealistic due to comorbidity or symptom duration). The PSN also evaluates treatment success at a personal level. There was moderate agreement and reliability for the most important goal domain. However, these improved to a substantial agreement and high reliability when also considering agreement if the most important goal domain was also a secondary goal domain in the retest. This indicates that the PSN’s reliability is good enough to identify all patient-relevant goals. Thus, patients find it hard to distinguish between their most important goal and their secondary goal, which may overlap. Our finding that most patients who obtained their PMG were satisfied with their treatment results suggests that their satisfaction was independent of whether their PMG was on their factual primary goal, confirming the PSN’s usability. Clinicians should always consider all goals and not just the most important goal domain.
User participation during the development, iterative approach, pilot testing, and mixed-methods study resulted in a content-valid, discriminative, and reliable patient-centered tool. The PSN was easily implemented, and patients deemed it relevant, complete, and understandable. The PSN helps patients prepare for their first consultation, enhances awareness, empowers them to take control of their treatment, and aids shared decision-making. The clinicians indicated that the PSN helps them to identify patients with high or low expectations and respond accordingly. These aspects may improve patients’ experience, expectation management, satisfaction with treatment results, and clinical outcomes. Compared with existing tools, – the PSN adds value. For example, the Canadian Occupational Performance Measure, Goal Attainment Scaling, and Patient-Specific Goalsetting Method tools are completed together with a health care provider. They are relatively time-consuming in clinical practice, and there is a risk of “therapist bias,” as a practitioner may influence these goals. Other tools do not assess patient-specific improvement goals and their relationship with satisfaction with treatment results, whereas the PSN does (ie, the PMG). Furthermore, in contrast with current tools, such as the Patient-Specific Functional Scale, Canadian Occupational Performance Measure, and Patient-Specific Goalsetting Method, the PSN allows distinct International Classification of Functioning, Disability, and Health domains, instead of focusing only on the activities and participation levels. None of the aforementioned tools assesses information needs, but the PSN does. Altogether, the PSN is a unique tool with added value in daily clinic and research. The distribution of the information need category and goal domain indicates that patients have different needs and goals. This highlights that a personalized treatment strategy, which can be informed by the PSN, is essential. 
Further, although most people wanted to reach a 9 to be satisfied, many patients consider lower scores satisfactory (ie, not all patients aim for the maximum score). The wide distribution indicates that this is indeed a personalized score, which further adds to the value of the PSN. The PMG distinguished satisfied patients from dissatisfied patients very well, indicating that it can be used to evaluate the clinical relevance of treatment effects. The PMG is especially beneficial, as it is determined before clinician consultation, providing a proxy for satisfaction with treatment results at a very early stage, presuming what patients think they want is what will satisfy them. Future research may investigate whether the PMG has a greater discriminative capacity for satisfaction than traditional values, such as the minimal important change or the patient acceptable symptom state. At our sites, a clinician dashboard is used that displays patient characteristics, patient-reported outcome measures, clinician-reported outcomes (eg, goniometry), and prediction models. With the PSN added, health care can be further personalized and data-driven. Nevertheless, the PSN is also valuable as a standalone tool. We distribute the PSN before surgeon consultation. If treatment is scheduled (eg, surgery or therapy), we allow patients to change previous answers. For example, the patient’s goal may have changed following expectation management during consultation. This strategy is, of course, optional.
Respondents indicate their most important needs and goals without knowing their diagnosis. It may also be difficult for individuals to accurately predict how they will feel about a future score, such as a 9 or 10, since this is an abstract idea that may not match their actual experience once they reach that level. However, focusing on the patient's most important needs and goals at this early stage benefits clinicians, as they may use these factors in decision-making and expectation management. Although some items may be moving targets (ie, a response shift, as goals may change over time), the PSN discriminated effectively between satisfied and dissatisfied patients. Future research could investigate how needs and goals change over time. The PSN does not replace traditional outcome measures, and the additional time investment should be considered when using it. Another limitation is the test-retest nonresponse. The small differences between responders and nonresponders seem clinically irrelevant, as age and type of work are unlikely to influence test-retest reliability. Nevertheless, although inevitable in test-retest studies, this may have influenced our findings. We addressed most issues mentioned by respondents, but we retained the cap on the number of information need categories respondents could select. Patients will, of course, have more questions, and clinicians should try to answer them all. However, we considered it essential that, at the least, the most important question is identified and answered, as there is a maximum information load that people can absorb. Therefore, it is essential to see the PSN as a conversation starter. In addition, patients might be better prepared by knowing their most important question. Another limitation is that we excluded patients with inconsistent answers on the survey. This may have influenced our findings on the understandability of the PSN. 
However, if we had included these patients, our findings would have been biased; thus, we believe that our decision was the best way to minimize bias. In addition, although the participants had different educational levels (including lower levels), it remains challenging to reach lower-literacy patients. Future research may specifically target these patients. Although we performed a cross-cultural translation to English, we only tested the Dutch version. Future studies may investigate the PSN in different languages and cultural settings.
The PSN is a novel, brief patient-reported tool for identifying individual patient needs and goals. By identifying these needs and goals, clinicians are better equipped to tailor information provision and treatment to the individual patient, enhancing the quality of care. The PSN can help patients to take control of their treatment. It is valid, reliable, and easy to use, especially, but not only, in digital form. The PSN is implementation-ready for hand and wrist care, and can easily be generalized to other fields. The PSN is provided with open access and is free to use.
Dr. Wouters received funding from ZonMw to support this research. The remaining authors have no conflicting interests in relation to the work presented in this article.
The authors thank all patients who completed questionnaires as part of their clinical care and agreed that their data could be used anonymously for the present study. In addition, the authors thank the members of the Hand-Wrist Study Group, clinicians, and personnel of Xpert Clinics, Xpert Handtherapie, and Equipe Zorgbedrijven for assisting in the routine outcome measurements that are the basis for this study.
The Hand-Wrist Study Group collaborators are as follows: Dirk-Johannes Jacobus Cornelis van der Avoort, MD; Ward Rogier Bijlsma, MD, PhD; Richard Arjen Michiel Blomme, MD; Herman Luitzen de Boer, MD; Gijs Marijn van Couwelaar, MD; Jan Debeij, MD, PhD; Jak Dekker, MSc; Reinier Feitz, MD, PhD; Alexandra Fink, PT; Kennard Harmsen, MD; Lisa Hoogendam, BSc; Steven Eric Ruden Hovius, MD, PhD; Rob van Huis, PT; Richard Koch, MD; Yara Eline van Kooij, PT, MSc; Jaimy Emerentiana Koopman, MD; Alexander Kroeze, MD; Nina Louisa Loos, MSc; Thybout Matthias Moojen, MD, PhD; Mark Johannes Willem van der Oest, PhD; Pierre-Yves Alain Adriaan Pennehouat, PT; Willemijn Anna de Ridder, PT, MSc; Johannes Pieter de Schipper, MD; Karin Schoneveld, PT, MSc; Ruud Willem Selles, PhD; Harm Pieter Slijper, PhD; Jeronimus Maria Smit, MD, PhD; Xander Smit, MD, PhD; John Sebastiaan Souer, MD, PhD; Marloes Hendrina Paulina ter Stege, MSc; Johannes Frederikes Maria Temming, MD; Joris Sebastiaan Teunissen, BSc; Jeroen Hein van Uchelen, MD, PhD; Joris Jan Veltkamp, PT; Guus Maarten Vermeulen, MD, PhD; Erik Taco Walbeehm, MD, PhD; Robbert Maarten Wouters, PT, PhD; Oliver Theodor Zöphel, MD, PhD; and Jelle Michiel Zuidam, MD, PhD.
Predictive immunocytochemistry in non-small cell lung cancer

General preanalytical, analytical, and postanalytical aspects

Cytological samples can be processed in different ways: as conventional smears, cytospin preparations, or liquid-based cytology, as well as FFPE cell blocks (CB). For simplicity, all cytological samples not processed as CB are referred to in this article as conventional cytology. An FFPE CB is the simplest cytological preparation type for ICC, since standardized protocols established for histological samples can be used. In general, ICC results on CB are robust and reproducible. It should be kept in mind, however, that there is no standardized method for CB preparation. For example, different collection media and prefixation agents (ethanol, formalin, methanol) are used, which can influence immunoreactivity and may require adjustments to the IHC protocol. However, CB are not always available and often show insufficient cellularity. A recent survey among European laboratories showed that Papanicolaou (Pap)-stained or air-dried conventional cytology specimens are still the most important cytological material for ICC. Performing ICC on the diagnostic, Pap-stained cytology slide ensures that the target cells of interest are present. For lung cytology, positively charged slides are recommended, as they improve cell adherence and prevent cells from detaching and being lost during technical ICC processing. Pap staining does not interfere with ICC, and no separate destaining step is required. 
Conventional cytology specimens show high preanalytical variability (different collection procedures, preservation and transport media, preparation methods, and fixation solutions). Therefore, cytology-specific adjustments of the analytical variables and separate validation of the ICC protocols are usually required, as the preanalytical processing differs considerably from that of FFPE samples. Numerous analytical variables can influence the immunochemical staining reaction, including the sensitivity and specificity of the primary antibody, the antibody concentration, the antigen retrieval conditions, the sensitivity of the detection system, and the calibration of the staining with appropriate positive controls. Compared with IHC protocols, ICC protocols for conventional cytological samples often require little or no pretreatment, and the antibody dilution frequently has to be adjusted. Postanalytically, identifying the carcinoma cells of interest on the slide and applying adequate scoring criteria are crucial for reliable ICC interpretation. Non-neoplastic cells, in particular macrophages, can react with various antibodies. Three-dimensional cell clusters can show nonspecific immunostaining in their center. Degenerated cells and necrosis can also cause nonspecific staining reactions and should be disregarded in the assessment. Of note, ICC does not damage DNA, so ICC-stained cytology slides can be used for further molecular predictive analyses. Fluorescence in situ hybridization (FISH) works well on ICC-stained samples when 3-amino-9-ethylcarbazole (AEC) is used as the chromogen, whereas 3,3'-diaminobenzidine (DAB) strongly obscures the FISH signals due to autofluorescence. 
Analytical validation and quality control

Positive and negative controls are required to establish and optimize an ICC protocol. The control samples must be processed and fixed in the same way as the clinical samples. Positive and negative cytology controls can be prepared from commercially available cell cultures (available for ALK, ROS1, and PD-L1), from leftover effusion fluids, or from fresh-tissue smears of resection specimens (lung cancer resections; for PD-L1, also placenta). Further validation should be performed on a series of clinical NSCLC samples by comparing the ICC results with the gold standard. This can be paired histological samples with validated IHC results or with results of molecular analyses (e.g., for ALK, ROS1, or pan-TRK). In general, a laboratory-developed test (LDT) is considered technically valid if it shows at least 90% concordance with the reference test. Cytology-specific recommendations for the analytical validation of predictive ICC protocols are not available; however, general recommendations for analytical validation have been proposed to ensure accurate and reproducible immunochemical predictive results. These recommendations include testing at least 20 positive and 20 negative controls for the initial analytical validation of a new predictive protocol, which is unrealistic for ALK, ROS1, and pan-TRK, as the prevalence of these oncogenic fusions in NSCLC is very low. Using an ICC protocol that has already been validated by another laboratory can simplify the introduction of a new ICC test. Since the preanalytical conditions are less standardized than for FFPE samples and local conditions may vary, validation is required to adapt the protocol to local conditions if necessary. 
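The validation arithmetic described above — comparing an index test against a reference method and requiring at least 90% agreement — can be made concrete. The counts below are hypothetical, chosen only to illustrate the calculation.

```python
# Illustrative sketch of analytical validation of an ICC LDT against a
# reference ("gold standard") method, using a 2x2 concordance table.
# tp/fp/fn/tn are counted relative to the reference result.

def validation_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Overall percent agreement, sensitivity, and specificity of the index test."""
    total = tp + fp + fn + tn
    return {
        "overall_agreement": (tp + tn) / total,
        "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
        "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
    }

# Hypothetical validation set: 19/20 known positives and 20/20 known
# negatives concordant with the reference test.
m = validation_metrics(tp=19, fp=0, fn=1, tn=20)
print(round(m["overall_agreement"], 3))   # 0.975
print(m["overall_agreement"] >= 0.90)     # True: meets the >=90% criterion
```

For rare fusion markers, assembling 20 true positives is the practical bottleneck, not the arithmetic — which is why the text notes that this recommendation is unrealistic for ALK, ROS1, and pan-TRK.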
After introducing a new predictive ICC test into clinical diagnostics, prospective monitoring of the ICC results (e.g., prevalence of PD-L1 expression levels at different cut-offs and prevalence of ALK-, ROS1-, and pan-TRK-positive results) provides continuous quality control and is helpful for detecting changes in analytical test performance and for ensuring reproducible results. The Bioplaza online platform, for example, is used by several pathology laboratories in Europe to prospectively track their coded PD-L1 results and to compare the prevalence of positive results with the respective national average. External quality control (EQC) is an important tool for quality assurance. Currently, there is only one EQC service (the UK NEQAS cytology module) for ICC on cytological samples. So far, it offers modules for only a limited number of ICC markers, not yet including PD-L1, ALK, or ROS1. This underscores the need for internal quality control and for an expansion of EQC programs tailored to cytological samples.
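A minimal sketch of what such prospective prevalence monitoring can look like numerically: checking whether the locally observed positivity rate is statistically compatible with an expected reference value (e.g., a national average). The numbers and the simple Wald confidence interval are illustrative assumptions, not a prescribed QC method.

```python
import math

# Sketch: is an expected reference prevalence (e.g., a national average)
# inside the 95% confidence interval of the locally observed proportion?
# Uses a normal-approximation (Wald) interval for simplicity.

def prevalence_within_expected(positives: int, total: int,
                               expected: float, z: float = 1.96) -> bool:
    """True if `expected` lies inside the ~95% CI of positives/total."""
    p = positives / total
    half_width = z * math.sqrt(p * (1 - p) / total)
    return (p - half_width) <= expected <= (p + half_width)

# Hypothetical lab: 135 of 500 NSCLC with TPS >= 50%; expected ~30%.
print(prevalence_within_expected(135, 500, expected=0.30))  # True
```

A sustained drift of the observed rate outside the expected range would be the trigger to re-examine the protocol (antibody lot, pretreatment, scoring) rather than proof of a problem by itself.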
Assessment of the PD-L1 status is required in metastatic NSCLC of all histological subtypes to identify patients for pembrolizumab monotherapy. In metastatic NSCLC with high PD-L1 expression (tumor proportion score, TPS, ≥ 50%) and no detectable targetable oncogenic driver alterations, pembrolizumab monotherapy is the first-line treatment of choice. PD-L1 testing can be performed on FFPE tumor tissue with highly standardized PD-L1 IHC assays validated in clinical trials, or with less costly PD-L1 IHC LDTs that are not tied to a specific staining platform. None of the commercial PD-L1 assays has been validated by the manufacturers for cytological samples. Study results show that histology-standardized PD-L1 IHC protocols can be readily applied to FFPE CB. Several studies show concordant PD-L1 results with paired histological NSCLC samples for both commercial PD-L1 assays and LDTs, with overall concordance rates of > 90% when using the clinically relevant TPS cut-off of 50%. Published data for conventional cytology are still limited. Initial retrospective studies on Pap-stained cytological slides showed high concordance of PD-L1 results with paired FFPE samples, using both the 22C3 and the SP263 antibody clone. The data, however, are inconsistent, as subsequent studies reported varying concordance rates, most likely due to insufficient validation. For establishing a new PD-L1 ICC protocol, Pap-stained cytospins of the Karpas-299 (diffuse PD-L1 staining of moderate intensity) and LNCaP cell lines (focal PD-L1 staining of weak intensity) are well suited as positive controls. For validation of the PD-L1 ICC, the positive controls in the clinical validation set (e.g., Pap-stained NSCLC cytology samples with known PD-L1 IHC status of paired histologies) should represent the full spectrum of PD-L1 staining intensities at different expression levels (including weak staining and a low proportion of stained cells) to ensure adequate calibration of the ICC. In routine diagnostics at the University Hospital Basel, 35% of all PD-L1 analyses in NSCLC are performed on Pap-stained cytological slides (calculated from > 1000 PD-L1 results, data not shown). The PD-L1 ICC is an SP263 LDT on the BOND-MAX staining platform (Leica Biosystems GmbH, Germany). As a continuous QC measure, the prevalence of PD-L1 results at different expression cut-offs (TPS < 1%, 1–49%, and ≥ 50%) is monitored prospectively with the Bioplaza online platform. The prevalence of NSCLC with high PD-L1 expression (TPS ≥ 50%) is comparable between biopsies (using the SP263 assay) and Pap-stained cytological slides (30% and 27%, respectively; p = 0.49) and matches the values expected from the published literature. These data from routine clinical practice underscore that PD-L1 ICC on conventional Pap-stained slides can yield reliable results for the clinically relevant cut-off (TPS ≥ 50%). However, a recently published national study from the Netherlands, which retrospectively analyzed routine diagnostic data, showed greater variability of PD-L1 positivity rates across laboratories for cytological samples than for histological samples (at a TPS cut-off of 50%). This reinforces the need for careful establishment and validation of PD-L1 ICC protocols as well as for quality control measures.

PD-L1 immunocytochemistry: scoring criteria

For a reliable assessment of the PD-L1 status, at least 100 viable tumor cells must be present. 
On CB, the same scoring criteria apply to tumor cells as for FFPE histologies. A tumor cell is positive for PD-L1 if partial or complete membrane staining is present, regardless of the staining intensity. Necrotic tumor cells and cytoplasmic staining reactions are not considered. On CB, it can be difficult to distinguish tumor cells from surrounding non-neoplastic cells, e.g., macrophages, which can lead to misinterpretation. Immunostaining with an epithelial marker (e.g., TTF-1, BerEp4) on a corresponding section can be useful for identifying tumor cells for the assessment of PD-L1 expression. Moreover, macrophages frequently show membranous PD-L1 expression and can serve as an internal positive control. In conventional cytology, the cells on the slides are not sectioned and therefore have an intact cell membrane. Because of this intact cell membrane, the membranous immunostaining is less clearly discernible and appears as a diffuse "pseudocytoplasmic" PD-L1 staining pattern. In many cases, however, a membranous accentuation of the immunostaining can be recognized (Fig.). PD-L1 expression is not restricted to tumor cells but also occurs in immune cells (IC). A PD-L1 assessment of tumor-associated IC is not reliably possible on cytological slides, as tumor-associated IC cannot be distinguished from IC outside the tumor bed. In NSCLC, however, the PD-L1 status is mandatory only for prescribing one drug, namely pembrolizumab, and only tumor cells are included in the TPS required for this purpose.
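The TPS arithmetic and the clinically relevant cut-offs described above can be sketched as a small scoring helper. The cell counts are hypothetical; the cut-offs (< 1%, 1–49%, ≥ 50%) and the 100-viable-tumor-cell adequacy requirement follow the text.

```python
# Sketch of tumor proportion score (TPS) computation for PD-L1 in NSCLC:
# TPS = membrane-stained viable tumor cells / all viable tumor cells.

def tps_category(stained_tumor_cells: int, viable_tumor_cells: int) -> str:
    """Return the clinically relevant TPS category.

    Requires at least 100 viable tumor cells for a valid assessment;
    only (partial or complete) membrane staining counts, regardless of
    intensity, and necrotic cells are excluded before counting.
    """
    if viable_tumor_cells < 100:
        return "inadequate (<100 viable tumor cells)"
    tps = 100 * stained_tumor_cells / viable_tumor_cells
    if tps >= 50:
        return "TPS >= 50%"
    if tps >= 1:
        return "TPS 1-49%"
    return "TPS < 1%"

print(tps_category(130, 200))  # TPS >= 50%  (TPS = 65%)
print(tps_category(5, 400))    # TPS 1-49%   (TPS = 1.25%)
print(tps_category(40, 90))    # inadequate (<100 viable tumor cells)
```

In practice the denominator is of course an estimate over the whole slide, not an exact count, but the category boundaries applied to that estimate are exactly these.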
In metastatic NSCLC, targeted drugs are approved for ALK, ROS1, NTRK, and, more recently, RET fusions. Overall, these are rare alterations, with prevalences of ALK, ROS1, RET, and NTRK rearrangements of 3–5%, 1–2%, 1–2%, and 0.2%, respectively, in Caucasian populations. Oncogenic gene fusions involve tyrosine kinases and result from structural rearrangements at the DNA level. Importantly, the fusion preserves the function of the tyrosine kinase and is optimized for overexpression of the fusion protein and constitutive activation of the kinase. Immunochemically detected ALK, ROS1, and pan-TRK expression is a surrogate for the respective rearrangements, as the proteins of these genes are essentially not expressed in their native form. Although, given the growing number of therapy-relevant genetic alterations in NSCLC, upfront DNA- and RNA-based next-generation sequencing (NGS) with simultaneous assessment of predictive mutations, amplifications, and fusions would be attractive, a substantial proportion of small NSCLC samples is insufficient for an additional RNA-based NGS analysis. DNA-based detection of rearrangements by break-apart FISH requires only a few tumor cells (usually only 50) but is a costly screening method and requires a high level of expertise. Immunochemical detection of fusions at the protein level is a widely available, cost-efficient screening method with a fast turnaround time and can easily be implemented in predictive testing algorithms. In contrast to PD-L1 immunochemistry, where PD-L1 expression is a continuous variable with considerable heterogeneity, immunochemistry for oncogenic fusions usually yields an essentially binary ("black-and-white") result. 
Most NSCLC with an ALK, ROS1, or NTRK rearrangement show a clearly visible, diffuse immunochemical staining pattern of the corresponding protein, which makes the assessment straightforward. Since the immunostaining is distributed diffusely and homogeneously across the tumor cells, samples with as few as 20 tumor cells are sufficient for assessment of both histological and cytological samples.

ALK

Based on comparative study results, ALK IHC using the 5A4 (LDT) or D5F3 antibody (LDT or assay) is an equivalent alternative to ALK FISH, which was formerly regarded as the gold standard for ALK testing. A positive ALK IHC result is sufficient for treatment with an ALK tyrosine kinase inhibitor. The automated D5F3 ALK assay for BenchMark staining platforms (Ventana Medical Systems, Inc., USA) can facilitate the introduction of ALK IHC for FFPE samples, as it does not require extensive local revalidation. For FFPE CB, ALK IHC protocols with 5A4 or D5F3 show good concordance with ALK FISH. Several studies report a sensitivity of 100% compared with FISH, whereas the specificity is somewhat more variable, ranging from 83 to 100%. On conventional cytological samples, ALK ICC can achieve high concordance with ALK FISH or ALK IHC results from paired histological NSCLC samples. Specificities range from 97 to 100%. Sensitivities, however, are more variable (66–100%), which underscores the need for rigorous validation and quality control. It should also be kept in mind that FISH is a difficult gold standard, with reported false-positive rates of > 10%, even in experienced FISH laboratories. In ALK-positive NSCLC, the ALK immunostaining is usually cytoplasmic and diffusely present in all tumor cells, with moderate to strong intensity (Fig.). 
In contrast to ALK IHC on histological samples, a positive ALK ICC result should be confirmed by a molecular method. The threshold for initiating a molecular analysis should be low to ensure high sensitivity. Even fewer than 20 positive tumor cells, regardless of staining intensity, should be considered diagnostic and trigger confirmatory testing by FISH or NGS.

ROS1

For ROS1, two antibody clones are available: D4D6 (Cell Signaling Technology, USA) and the later introduced SP384 (Ventana Medical Systems, Inc.). In contrast to ALK, there is no commercial IHC assay for ROS1. D4D6 ROS1 IHC is highly sensitive for the detection of ROS1 rearrangements. Since the specificity is more variable than for ALK IHC, a positive ROS1 IHC result must be confirmed by a molecular method before treatment can be initiated. FFPE CB and Pap-stained cytology slides are suitable for ROS1 ICC. Vlajnic et al. demonstrated identical results for D4D6 ROS1 ICC and molecular analyses (ROS1 FISH or RNA-based NGS) in 295 prospective Pap-stained cytological NSCLC samples. The ROS1 ICC detected 13 ROS1-positive NSCLC (sensitivity and specificity of 100%). The staining characteristics of ROS1 ICC are comparable to those of ROS1 IHC on histological samples. Positive NSCLC usually show diffuse cytoplasmic staining, although the staining can be heterogeneous and its intensity can vary between tumor cells (Fig.). As with ALK, the threshold for initiating a molecular analysis should be kept low. Nonspecific ROS1 staining can occur in non-neoplastic cells, in particular reactive type II pneumocytes and macrophages.

NTRK

NTRK comprises three genes, NTRK1, -2, and -3, which encode the transmembrane receptor tyrosine kinases TRKA, -B, and -C, respectively. 
The frequency of NTRK fusions in NSCLC is very low (0.02%), and fusions can involve all three NTRK genes. At such a low prevalence, NTRK FISH is not a suitable screening tool in NSCLC, as three separate FISH tests, one for each of the three NTRK genes, would be required. TRKA, -B, and -C show a high degree of homology between their kinase domains and can all be detected with the pan-TRK antibody clone EPR17341 (Abcam, Cambridge, UK). A commercial CE-IVD pan-TRK (EPR17341) assay for BenchMark staining platforms (Ventana Medical Systems, Inc.) is available for FFPE samples. There are as yet no published cytology-specific data on pan-TRK ICC. The expression of TRKA, -B, and -C in adult tissues is restricted to neural components (e.g., cortical brain) and the testes. These tissues can be used to create positive controls for establishing a pan-TRK ICC. TRK expression can vary in intensity and subcellular localization but is cytoplasmic in most cases. Nuclear expression is typically observed with NTRK3 fusions. TRK staining in ≥ 1% of tumor cells is considered a positive IHC result, as tumors with an NTRK3 fusion can show very focal or weak TRK staining. The sensitivity of pan-TRK IHC is high for NTRK1 and NTRK2 (96.2% and 100%, respectively) but only 79.4% for NTRK3 fusions. NTRK3 fusions can thus be missed, but they are extremely rare in NSCLC. The overall sensitivity and specificity of the ICC in NSCLC are 87.5% and 100%, respectively. According to current recommendations, a positive pan-TRK IHC result must still be confirmed by a molecular method (FISH or RNA-based NGS).
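The reflex-testing logic running through the ALK, ROS1, and NTRK subsections — immunochemistry as a sensitive screen, molecular confirmation before treatment — can be summarized as a simple decision step. This is an illustrative simplification of the described algorithm, not a clinical decision tool; the function and message strings are my own.

```python
# Hedged sketch of the fusion-screening reflex described in the text:
# a positive ALK, ROS1, or pan-TRK immunostain on a cytological specimen
# triggers molecular confirmation (FISH or RNA-based NGS) before treatment.

def next_step(marker: str, icc_positive: bool) -> str:
    marker = marker.upper()
    if not icc_positive:
        return "no rearrangement suspected; no confirmation needed"
    # Per the text: a positive ALK *ICC* result (unlike ALK IHC on histology)
    # and any positive ROS1 or pan-TRK result require molecular confirmation.
    return f"{marker}: confirm by FISH or RNA-based NGS before treatment"

print(next_step("ROS1", True))  # ROS1: confirm by FISH or RNA-based NGS before treatment
print(next_step("ALK", False))  # no rearrangement suspected; no confirmation needed
```

The asymmetry with histology (where a positive D5F3 ALK IHC alone can justify treatment) is exactly what this extra confirmation step encodes for cytology.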
Based on the results of comparative studies, ALK IHC using the 5A4 (LDT) or D5F3 antibody (LDT or assay) is an equivalent alternative to ALK FISH, which was formerly considered the gold standard for ALK testing. A positive ALK IHC result is sufficient for treatment with an ALK tyrosine kinase inhibitor . The automated D5F3 ALK assay for BenchMark stainers (Ventana Medical Systems, Inc., USA) can facilitate the implementation of ALK IHC for FFPE samples, as it does not require extensive local revalidation. For FFPE cell blocks (CB), ALK IHC protocols with 5A4 or D5F3 show good concordance with ALK FISH. Several studies report a sensitivity of 100% compared with FISH, whereas specificity is somewhat more variable, ranging from 83 to 100% . On conventional cytological samples, ALK ICC can achieve high concordance with ALK FISH or ALK IHC results from paired histological NSCLC samples. Specificities range from 97 to 100%. Sensitivities, however, are more variable (66–100%), underscoring the need for rigorous validation and quality control . It should also be kept in mind that FISH is a difficult gold standard, with a reported false-positive rate of > 10%, even in experienced FISH laboratories [ , , ]. In ALK-positive NSCLC, ALK immunostaining is usually cytoplasmic, diffusely present in all tumor cells, and of moderate to strong intensity (Fig. ). In contrast to ALK IHC on histological samples, a positive ALK ICC result should be confirmed by a molecular method. The threshold for initiating molecular analysis should be low to ensure high sensitivity. Even fewer than 20 positive tumor cells, regardless of staining intensity, should be considered diagnostic and trigger confirmatory testing by FISH or NGS.
ROS1 For ROS1, two antibody clones are available: D4D6 (Cell Signaling Technology, USA) and the later-introduced SP384 (Ventana Medical Systems, Inc.). In contrast to ALK, no commercial IHC assay exists for ROS1. D4D6 ROS1 IHC is highly sensitive for the detection of ROS1 rearrangements. Because its specificity is more variable than that of ALK IHC, a positive ROS1 IHC result must be confirmed by a molecular method before treatment can be initiated . FFPE CB and Pap-stained cytology slides are suitable for ROS1 ICC [ , , ]. Vlajnic et al. demonstrated, in 295 prospective Pap-stained cytological NSCLC samples, identical results for D4D6 ROS1 ICC and molecular testing ( ROS1 FISH or RNA-based NGS). ROS1 ICC detected 13 ROS1-positive NSCLC (sensitivity and specificity of 100%) . The staining characteristics of ROS1 ICC are comparable to those of ROS1 IHC on histological samples. Positive NSCLC usually show diffuse cytoplasmic staining, although the staining can be heterogeneous and its intensity can vary between tumor cells (Fig. ; ). As with ALK, the threshold for initiating molecular analysis should be kept low. Nonspecific ROS1 staining can occur in non-neoplastic cells, particularly in reactive type II pneumocytes and macrophages .
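The sensitivity and specificity values quoted in this section reduce to simple confusion-matrix ratios. As an illustration only (the counts below are inferred from the Vlajnic et al. figures: 13 concordant positives among 295 samples, with no discordant results reported), they can be computed as:

```python
def sensitivity(tp, fn):
    """True-positive rate: detected positives / all true positives."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: correctly negative / all true negatives."""
    return tn / (tn + fp)

# Vlajnic et al.: 295 Pap-stained NSCLC samples, 13 ROS1-positive by ICC,
# fully concordant with FISH / RNA-based NGS -> no false positives/negatives
tp, fn = 13, 0
tn, fp = 295 - 13, 0
print(sensitivity(tp, fn), specificity(tn, fp))  # 1.0 1.0
```

Discordant ICC results would lower these ratios accordingly, which is why even a few false negatives matter at the low prevalence of these fusions.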
NTRK NTRK comprises three genes, NTRK1 , - 2 , and ‑ 3 , which encode the transmembrane receptor tyrosine kinases TRKA, -B, and -C, respectively. The frequency of NTRK fusions in NSCLC is very low (0.02%) and can involve all three NTRK genes . At such a low prevalence, NTRK FISH is not a suitable screening tool in NSCLC, as three separate FISH assays, one for each NTRK gene, would be required. TRKA, -B, and -C share a high degree of homology between their kinase domains and can all be detected with the pan-TRK antibody clone EPR17341 (Abcam, Cambridge, UK). A commercial CE-IVD pan-TRK (EPR17341) assay for BenchMark stainers (Ventana Medical Systems, Inc.) is available for FFPE samples. No cytology-specific data on pan-TRK ICC have been published yet. Expression of TRKA, -B, and -C in adult tissues is restricted to neural components (e.g., cortical brain) and the testes. These tissues can be used to prepare positive controls when establishing a pan-TRK ICC. TRK expression can vary in intensity and subcellular localization but is cytoplasmic in most cases. Nuclear expression is typically observed with NTRK3 fusions . TRK staining in ≥ 1% of tumor cells is considered a positive IHC result, as tumors with an NTRK3 fusion can show very focal or weak TRK staining. The sensitivity of pan-TRK IHC is high for NTRK1 and NTRK2 (96.2% and 100%, respectively) but only 79.4% for NTRK3 fusions. NTRK3 fusions can therefore be missed, but they are extremely rare in NSCLC. The overall sensitivity and specificity of ICC in NSCLC are 87.5% and 100%, respectively . According to current recommendations, a positive pan-TRK IHC result must still be confirmed by a molecular method (FISH or RNA-based NGS) .
Cytological samples should be used for predictive biomarker analyses so that patients are not exposed to the unnecessary risk of repeat sampling. Immunocytochemical (ICC) testing on cytological samples is common practice and has become indispensable for diagnosis and predictive biomarker analyses. On formalin-fixed, paraffin-embedded (FFPE) cell blocks, predictive ICC can usually be performed reliably with standardized protocols developed for histological samples. Because conventional cytological samples differ considerably from FFPE samples, establishing predictive ICC protocols usually requires cytology-specific adaptation of the analytical variables and separate validation. Quality control measures are essential to ensure high-quality predictive ICC results.
Causes of death of forensic autopsy cases tested positive for COVID-19 in Tokyo Metropolis, Japan

Introduction Coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has spread rapidly across the globe since the end of 2019, and is responsible for 629 million infections and 6.5 million deaths as of October 28, 2022 . Despite the decrease in mortality rate and severity of COVID-19 due to the development of vaccines and therapeutics, some individuals deteriorate rapidly and develop acute respiratory distress syndrome (ARDS) . In Japan, the number of patients increased dramatically in January and August 2021, and July 2022, making medical resources scarce. As a result, even older patients or those with underlying diseases are forced to remain at home if they do not have severe symptoms, leading to the appearance of out-of-hospital sudden deaths . Postmortem computed tomography (PMCT) revealed severe pneumonia as a cause of death in some cases; however, PMCT has not always been performed for deaths occurring outside of the hospital. In addition, the cautious stance on performing autopsies on COVID-19 positive cases has hampered assessment of the causative relationship between COVID-19 and the cause of out-of-hospital deaths in Japan. Several case reports in which pathological autopsies were performed have been published; however, these reports are mostly single case reports and do not include out-of-hospital or non-natural deaths , , , , , , . There are few reports of forensic autopsy cases in Japan at the time of writing this manuscript . Therefore, the profile of COVID-19 related deaths that occurred outside the hospitals in Japan remains unclear. Tokyo Metropolis is a metropolitan prefecture, and the medical examiner system has been implemented in its special wards.
All medicolegal deaths, including natural, non-natural, and undetermined manner of death, that occurred in the special wards of Tokyo Metropolis are reported to the Tokyo Medical Examiner’s Office. Medical examiners performed postmortem examinations to determine the manner and cause of death in these cases. Medical examiners also encountered deceased individuals who tested positive for SARS-CoV-2, making them suspected COVID-19 cases. They evaluated the results of situational investigation, external examination, viral RNA testing, and PMCT and performed an autopsy if the cause of death was undetermined. In this study, we investigated forensic autopsy cases that tested positive for COVID-19 and were handled by medical examiners in the Tokyo Metropolis, to clarify the profile of COVID-19 related deaths that occurred outside the hospital in Japan. In addition, we investigated whether there is a difference in the involvement of COVID-19 in the cause of death between cases before and after 2022, when the omicron variant became dominant.
Materials and methods From April 1, 2020, to July 31, 2022, 41 autopsies of persons who tested positive for SARS-CoV-2 ante- or postmortem were performed at the Tokyo Medical Examiner’s Office. Among the 41 patients, 8 (19.5%) died in 2020, 14 (34.1%) died in 2021, and 19 (46.3%) died in 2022. All documents available concerning the deaths of these cases (medical history, police investigation reports, death certificates, PMCT, and autopsy reports) were evaluated descriptively. We examined age, sex, medical history, PMCT findings, autopsy findings, cause of death, and the causal relationship between death and COVID-19. Autopsy findings included macroscopic findings, histopathological findings, and toxicological analyses. Histopathological examination was performed in all cases, and blood ethanol levels were measured in all cases except for case (No. 19). Toxicological analysis was performed for 21 cases (No. 1,2,5,7,8,13,15–17,20,21,23–25,27,30,32–34,37,41). For the viral RNA test, nasopharyngeal swab samples were subjected to a transcription-reverse transcription concerted reaction using TRC Ready-80 (Tosoh Techno-System, Tokyo, Japan), except for two cases in which COVID-19 had already been diagnosed before death (No. 15, 20). Regarding causal relationships, we divided the cases into three groups (death directly related to COVID-19, death indirectly related to COVID-19, and death unrelated to COVID-19), according to the death certificates and autopsy findings. The proportion of deaths directly related to COVID-19 was compared between the cases that occurred before and after 2022. Whole-body PMCT was performed before autopsy in all cases using a 64-row CT scanner (Somatom Definition AS; Siemens Healthcare, Forchheim, Germany) with the following parameters: 120 kV; quality reference, 400 mAs; thickness, 64 × 0.6 mm. Image data were analyzed using the syngo.via software (Siemens Healthcare). 
Lung patterns were evaluated and divided into two subcategories: ground glass opacities (GGO) and consolidation. The location of each lung pattern (diffuse, patchy, hypostatic, anterior, or posterior) was also determined. The Ethics Committee of the Tokyo Medical Examiner’s Office approved the study protocol and data use (approval number: 2020–3).
Results ( ) The mean age of the deceased individuals was 58.0 years (range: 28–96 years), and the most frequent age group was 50–59 years (n = 10), followed by 70–79 years (n = 8), 60–69 years (n = 8) and 40–49 years (n = 8). The study sample consisted of 33 males (80.5%) and 8 females (19.5%). Twenty-seven cases (65.9%) had a medical history, and the most frequent diseases were hypertension (n = 7), diabetes (n = 7), mental health disorders (n = 5), and cardiac diseases (n = 3). The mean body mass index (BMI) was 24.8 kg/m². Approximately half of the cases (19 cases, 46.3%) had a BMI ≧25.0, and 7 cases (17.1%) had a BMI ≧30. The manner of death was categorized as natural in 32 cases and non-natural in 9 cases. Causes of death due to natural causes included pneumonia (n = 17), myocarditis (n = 5), laryngotracheobronchitis (n = 3), ischemic heart disease (n = 2), dehydration (n = 1), emphysema (n = 1), liver cirrhosis (n = 1), rectal cancer (n = 1), and peritonitis (n = 1). Natural deaths included 26 cases in which COVID-19 was directly related to death, including pneumonia ( ), myocarditis ( ), and laryngotracheobronchitis ( ) (No. 1–6, 10–21, 25, 26, 29–31, 33, 36, 37). There were three cases in which COVID-19 was indirectly related to death (No. 22, 24, 28). The main pathology in these cases was alcoholic liver cirrhosis, severe cardiomegaly with old myocardial infarction, and emphysema. There were 3 cases in which COVID-19 was not related to death (No. 34, 35, 40), and causes of death in those cases were ischemic heart disease, peritonitis due to perforation of the ileum, and rectal cancer. Causes of death due to non-natural causes (n = 9) included drowning (n = 2), choking (n = 2), psychiatric drug poisoning (n = 1), acute alcohol intoxication (n = 1), heat stroke (n = 1), acute subdural hematoma (n = 1), and multiple injuries (n = 1) (case 7–9, 23, 27, 32, 38, 39, 41). All non-natural deaths were unrelated to COVID-19, except for one case (No. 32).
Among the 41 cases, 26 deaths were directly related to COVID-19 (63.4%), 4 were indirectly related (9.8%), and 11 were unrelated (26.8%). The proportion of deaths directly related to COVID-19 was higher in cases before 2022 (81.8%) than in those from 2022 onward (42.1%). Lung pathology related to COVID-19 was observed in 25 cases, including diffuse alveolar damage (DAD) involving wide areas of both lungs (n = 17) ( b), partial hyaline membranes along the alveolar walls and infiltration of lymphocytes/macrophages in the alveolar septa (n = 4), and focal infiltration of neutrophils in the alveolar spaces (n = 4). All deaths from pneumonia in this study sample showed DAD over wide areas of the lungs. Thrombi were observed macroscopically in the peripheral branches of the pulmonary arteries in one case, and thrombi were observed microscopically in the small arteries of the lungs in 6 cases ( d). Representative autopsy findings, other than lung pathology, included cardiomegaly (n = 20), fatty liver (n = 12), coronary sclerosis (n = 7), and nephrosclerosis (n = 4). All deaths from pneumonia (n = 17) showed diffuse GGO and/or consolidation, except for one case (case 19; decomposition) ( a, c). A crazy-paving pattern was observed in 8 cases. Regarding PMCT findings in the other cases (n = 24), patchy or localized GGO and/or consolidation was observed in 16 cases, only hypostatic changes were seen in 2 cases, and diffuse GGO and/or consolidation was observed in 6 cases.
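As a sanity check, the counts reported in the Methods and Results are internally consistent; a short tally (figures copied directly from the text) confirms the totals:

```python
# Counts as reported in the Results section of this study
natural_causes = {
    "pneumonia": 17, "myocarditis": 5, "laryngotracheobronchitis": 3,
    "ischemic heart disease": 2, "dehydration": 1, "emphysema": 1,
    "liver cirrhosis": 1, "rectal cancer": 1, "peritonitis": 1,
}
relatedness = {"direct": 26, "indirect": 4, "unrelated": 11}
by_year = {2020: 8, 2021: 14, 2022: 19}

assert sum(natural_causes.values()) == 32       # natural deaths
assert sum(natural_causes.values()) + 9 == 41   # plus 9 non-natural deaths
assert sum(relatedness.values()) == 41          # relatedness categories
assert sum(by_year.values()) == 41              # deaths per year
print("counts consistent")
```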
Discussion It has been reported that the presence of certain chronic diseases besides COVID-19 pneumonia is a poor prognostic factor and increases mortality , , . Chen et al. reported that hypertension, cardiovascular disease, and diabetes were more prevalent among COVID-19 patients who died than among survivors . Zhou et al. reported that hypertension, diabetes, coronary heart disease, chronic renal disease, and COPD were more frequent among non-survivors than among survivors . Patients with obesity are reported to have a high risk of mortality from COVID-19 . The major comorbidities detected in this study (e.g., hypertension, diabetes, cardiac diseases, obesity, cardiomegaly, fatty liver, coronary sclerosis, and nephrosclerosis) were consistent with those in previous studies. Several studies have investigated the causes of death in consecutive forensic autopsies of COVID-19 positive cases , , , , , , . Edler et al. investigated 80 autopsy cases of death with SARS-CoV-2 infection and reported that pneumonia was the cause of death in all cases of definite COVID-19 deaths (71%) and that pneumonia was present in all cases of probable COVID-19 deaths (12.5%). Four deaths were defined as non-COVID-19 deaths with virus-independent causes (5%). They also reported that DAD was observed histologically in 8 of 12 cases they evaluated and that pneumonia was combined with fulminant pulmonary thromboembolism (PE) in 8 cases . Romanova et al. investigated forensic (n = 60) and clinical (n = 42) autopsies with positive postmortem SARS-CoV-2 PCR results. According to their results, COVID-19 caused or contributed to death in 71% of the clinical cases and 83% of the forensic autopsies. Regarding the cause of death, the vast majority of fatalities were related to DAD, and lymphocytic myocarditis was a rare finding (n = 2) . Arslan et al.
analyzed COVID-19 positive cases (n = 26), and the cause of death was determined to be viral pneumonia in 21 cases, blunt trauma in 4 cases, and hanging in one case. They also reported that DAD was prominent and the main pathology was pneumonia in autopsied cases (n = 7) . Muchelenganga et al. investigated 21 COVID-19 autopsy cases and reported that PE (n = 16), DAD (n = 3), and pneumonia (n = 2) were the common causes of death . Fanton et al. reported four COVID-19 related out-of-hospital cardiac arrests; the cause of death in three individuals was acute respiratory failure due to DAD, while violent death due to suicidal acute alcohol intoxication was the cause of death in one case . Keresztesi et al. investigated 15 autopsy cases and reported that massive bilateral pneumonia was the direct cause of death in 13 cases. The causes of death in the other two cases were pulmonary carcinoma and bronchopneumonia following femur fracture. DAD was observed histologically in five out of seven cases that they evaluated . Danics et al. divided their 100 autopsy cases into three mortality categories by relevance of COVID-19 infection: strong association (n = 57), contributive association (n = 27), and weak association (n = 16); lung pathology was the primary cause of death in the strong and contributive categories . In this study, the most frequent cause of death was pneumonia, and the most common lung histopathology was DAD. These features are similar to the results of the previous studies mentioned above; however, the proportion of deaths from pneumonia in this study (41.5%) was lower than in previous studies (e.g., 86.3% , 80.8% , and 86.7% ). In addition, this study showed that the causes of death from COVID-19 were more varied, including pneumonia, myocarditis, laryngotracheobronchitis, and dehydration (i.e., emaciation due to COVID-19). In this study, there were three cases of laryngotracheobronchitis as the cause of death.
Unlike previous forensic autopsy studies, our sample included deaths occurring in 2022 (n = 19, 46.3%), when the omicron variant spread quickly globally, replacing the delta variant. All deaths due to laryngotracheobronchitis in this study occurred in February 2022. Compared with previous variants, the omicron variant showed milder lower respiratory tract symptoms and olfactory or taste disturbances; however, upper airway symptoms, such as sore throat, rhinorrhea, and sneezing, were reported more frequently . Piersiala reported a case series of COVID-19 positive patients with acute odynophagia, severe sore throat, and fever. All the patients developed COVID-19 associated acute laryngitis and/or pharyngitis . In Japan, multiple cases of COVID-19 induced upper airway stenosis and related acute laryngitis were reported by the Japanese Society of Otorhinolaryngology Head and Neck Surgery in late February 2022 . In our cases, dehydration (No. 29, 30), pneumonia (No. 30), and airway stenosis (No. 33) may have accompanied laryngotracheobronchitis, resulting in death. On the other hand, several studies showed milder severity of the omicron variant compared to previous variants , . Regarding the causal relationship between death and COVID-19 in our sample, the proportion of deaths directly related to COVID-19 was higher in cases before 2022 (81.8%) than in those after 2022 (42.1%). This result suggests that the proportion of deaths directly related to COVID-19 might have decreased after the appearance of the omicron variant, although further studies are needed. PMCT findings correlate with the severity of COVID-19 lung disease and have been proposed as a useful screening tool to identify COVID-19 related fatalities , , . All cases of death from pneumonia in this study, except for one case (severe decomposition), showed findings characteristic of COVID-19 pneumonia, such as diffuse GGO, crazy paving, and areas of consolidation , , , , .
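The paper reports the two proportions (81.8% vs. 42.1%) without a formal significance test. As an illustrative sketch only, and assuming the underlying counts implied by those percentages (18 of 22 deaths directly related before 2022, 8 of 19 in 2022), a two-sided Fisher exact test can be computed with the Python standard library:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, row1)

    def p_table(k):
        # P(top-left cell == k) under the null: hypergeometric distribution
        return comb(col1, k) * comb(n - col1, row1 - k) / denom

    p_obs = p_table(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    # two-sided: sum over all tables at most as probable as the observed one
    return sum(p_table(k) for k in range(lo, hi + 1)
               if p_table(k) <= p_obs * (1 + 1e-9))

# Counts inferred from the reported percentages (an assumption):
# before 2022, 18 of 22 deaths directly related to COVID-19 (81.8%);
# in 2022, 8 of 19 (42.1%)
p = fisher_exact_two_sided(18, 4, 8, 11)
print(round(p, 4))  # 0.0115
```

On these assumed counts the test yields p ≈ 0.011, which would support the authors' observation, although the small sample warrants caution.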
The use of PMCT in conjunction with postmortem RNA testing could be considered a reliable and safe modality for confirming COVID-19 pneumonia. However, a reliable diagnosis of fatal COVID-19 can only be established by a combination of clinical, radiologic, microbiologic, and histopathologic correlations, with the latter two having the most discerning diagnostic value . Indeed, GGO and/or consolidation were seen on PMCT in many cases of death unrelated to COVID-19, although they are not typical of COVID-19 pneumonia. Furthermore, COVID-19 related extra-pulmonary manifestations in this study, such as myocarditis/epicarditis and laryngotracheobronchitis, are difficult to prove by PMCT, although pericardial effusion on PMCT might hint at the diagnosis of myocarditis/epicarditis. In Japan, physicians have to report cases of COVID-19 and cases suspected of dying from COVID-19 to the public health center according to the Infectious Diseases Control Law. However, reported cases may include not only definite COVID-19 deaths, but also cases dying with COVID-19, including deaths unrelated to COVID-19, because differentiating between deaths with COVID-19 and deaths from COVID-19 is difficult, especially for out-of-hospital deaths in which ante-mortem information is limited. The clinical course of COVID-19 is highly heterogeneous, and the deceased may have pre-existing diseases that may contribute to or even cause death . In addition to microbiological testing and PMCT, autopsy and histological analysis of COVID-19 positive cases play a crucial role in assessing the causative relationship between death and COVID-19. The results of this study suggest that the proportion of definite COVID-19 deaths among SARS-CoV-2 positive cases might have decreased, and the causal relationship might have been more valid after the appearance of the omicron variant, which emphasizes the need for autopsying COVID-19 positive cases to obtain accurate mortality statistics. This study had several limitations.
First, the sample size was small, and the data were collected from an area inhabited by approximately 7% of the total Japanese population. Therefore, these results may not be generalizable to the entire Japanese population. Second, we selected only autopsied cases (41 of 365 SARS-CoV-2 positive cases; 11.2%) for this study because detailed investigation (e.g., of the causal relationship between death and COVID-19) was impossible in non-autopsied cases. Therefore, we cannot deny that there was selection bias in this study. Further large-scale studies are needed to address these limitations. In conclusion, our study investigated forensic autopsy cases that tested positive for COVID-19 in Tokyo Metropolis and revealed that the leading cause of death was pneumonia, similar to previous studies; however, the causes of COVID-19 related death were varied, including myocarditis, laryngotracheobronchitis, and emaciation due to COVID-19. Furthermore, the proportion of deaths directly related to COVID-19 was lower in cases from 2022 onward, when the omicron variant was dominant. Viral mutations may affect the pathology and mortality statistics. The results of this study further emphasize the need for autopsying COVID-19 positive cases to obtain accurate mortality statistics, because physicians should consider more differential diagnoses in the era of the omicron variant.
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Editorial: Invasive fungal disease in the immunocompromised host/Research Topic proceedings of the mycology 2021 meeting

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Obstetrician–gynecologists’ perspectives towards medication use during pregnancy: A cross-sectional study

Pregnant women undergo unique physiological changes that may affect the pharmacokinetic properties of various medications. Around 40% of pregnant women use either over-the-counter (OTC) or prescribed medications during their pregnancy to treat chronic or acute conditions, such as nausea, vomiting, diabetes, asthma, and hypertension. Pharmacological agents contribute to significant, preventable congenital abnormalities, leading to a rise in public health concerns about using medications during pregnancy. To produce such an effect, the medication must possess certain properties that allow it to cross the placenta, including but not limited to being unbound, a weak base, lipid-soluble, and of low molecular weight. Also, the fetus’s stage of development is a crucial point to consider when using medication during pregnancy. Most pregnant women know that caution regarding medication use during pregnancy is paramount, which leads them to seek medical advice before taking any medication. A vast majority of studies evaluated pregnant women’s knowledge and attitudes towards using medicines during their pregnancy. One such study, conducted in Saudi Arabia in 2014, concluded that women claim to receive inadequate medication-related information from physicians and pharmacists; instead, they rely on medication leaflets to attain such information. Obstetrician–gynecologists are frequently faced with inadequate and imprecise information when making decisions for clinical management. Although some medications’ teratogenicity potential is well known, there is limited information on the safety of many other medications used during pregnancy due to ethical considerations. Pregnant and lactating women are typically excluded from clinical trials.
A study published in 2010 in the United States examined Obstetrician–gynecologists’ knowledge and informational resources regarding the safety of medication use during pregnancy. Results showed that the number of years in practice was associated with their response choice to medication safety questions. Most responders indicated sufficient access to helpful information regarding medication teratogenicity potential. However, more than half of the participants selected the lack of a single comprehensive source of information as the most significant barrier. Another study evaluating community pharmacists’ knowledge about medication safety during pregnancy in Saudi Arabia found a significant difference between age groups and country of graduation in knowledge test scores. To the best of our knowledge, no studies have been conducted to assess the knowledge of Obstetrician–gynecologists in Saudi Arabia and their access to information about the risks of medication use during pregnancy. Such a study is highly warranted because physicians’ knowledge and practice directly affect patients’ health. Therefore, this study aims to assess Obstetrician–gynecologists’ knowledge of medication teratogenicity potential, their frequently used resources, and their residency training contribution to medication use during pregnancy. The present study is a cross-sectional, survey-based study targeting licensed obstetrician-gynecologists practising in Saudi Arabia. Saudi and non-Saudi practitioners were eligible to fill out the questionnaire. Over 6 months, data were collected using a validated self-administered web-based questionnaire developed by the American College of Obstetricians and Gynecologists. The questionnaire is organized into 5 domains. The first domain (7 items) includes the participants’ demographic data. The second domain focused on assessing the knowledge about prescription medications, OTC, dietary supplements, and herbal products in the first trimester (23 items).
The third domain was about the references used to obtain appropriate and updated information on medication use during pregnancy (15 items). The fourth domain was to demonstrate the physician’s attitudes toward medication use during pregnancy (6 items). The last domain was regarding the rating of the participant’s training in medication use during pregnancy (6 items). The questions utilized in the questionnaire included multiple choice, check all that apply, Likert-like scale, and fill-in-the-blank questions. With almost 350 clinicians registered as Obstetrician–gynaecologist specialists or consultants in Saudi Arabia, the sample size was calculated to be 184, with a 95% confidence level and a 5% margin of error, as follows:

SS = [Z² × p(1 − p)] / C² = [(1.96)² × 0.5 × (1 − 0.5)] / (0.05)² = 384.16

Adjusted SS = SS / [1 + (SS − 1)/Pop] = 384.16 / [1 + (384.16 − 1)/350] ≈ 184

King Saud University Medical City’s Institutional Review Board approved this study (19/0929). Following ethical approval, an online survey was sent to the department of Obstetrics & Gynecology in 6 large hospitals around the Kingdom to be distributed among their employees. Reminders were sent to non-responders, and visits were conducted to some sites with low response rates. Data were analyzed using SPSS version 25. Categorical variables were presented as numbers and percentages, while continuous variables were presented as mean and SD if normally distributed. However, if not normally distributed, median and IQR were used. The Shapiro–Wilk test was used to assess for normal distribution. Analyses were tested for significance using an α of 0.05. A total of 60 obstetrician–gynecologists completed the survey, with a response rate of 33%. The flowchart for the inclusion and exclusion process is shown in Figure . Most participants were female (72%), with a median age of 42. The median years of practice among the participants was 13 years.
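The sample-size calculation in the Methods follows Cochran's formula with a finite-population correction; as a sketch (rounding the corrected value up to the next integer is an assumption), it can be reproduced as:

```python
import math

def cochran_sample_size(pop, z=1.96, p=0.5, c=0.05):
    """Cochran's formula with finite-population correction (illustrative)."""
    ss = (z ** 2) * p * (1 - p) / (c ** 2)   # infinite-population sample size
    adjusted = ss / (1 + (ss - 1) / pop)     # finite-population correction
    return ss, adjusted

ss, adjusted = cochran_sample_size(350)
print(round(ss, 2), math.ceil(adjusted))  # 384.16 184
```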
Around 40% were full-time hospital practitioners, and most (85%) were working in the central region (i.e., Riyadh). Seventy per cent of the participants reported providing routine care/gynecologic exams. Characteristics of participants included in the study are presented in Figure and Supplemental Digital Content (Appendix 1, http://links.lww.com/MD/H763 ). 3.1. Assessment of medication use during the first trimester of pregnancy Participants’ assessment of 23 selected medications regarding fetus safety if taken during the first trimester is presented in Supplemental Digital Content (Appendix 2, http://links.lww.com/MD/H764 ). Regarding prescription medications (Fig. ), the majority (87%) agreed that Isotretinoin is contraindicated. However, 8.3% of them were not sure. For Alprazolam, 25% considered it unsafe, 35% indicated that it required a risk-benefit assessment, and 30% were unsure. Most participants (76.7%) considered acetaminophen safe to use. Regarding dietary supplements (Fig. ), 75% stated that vitamin A supplements are not safe during the first trimester. Around two-thirds (60%) of respondents were unsure about the safety of herbal remedies during pregnancy. 3.2. Information resources utilized by obstetrician-gynecologists Regarding the information resources used to answer questions, online databases (e.g., Lexi and Micromedex) were chosen as the top resources utilized by obstetrician-gynecologists to obtain information about the teratogenicity of medications (45%), followed by pharmacist consultation, FDA label, and colleagues’ conversation (21.7%). Further information is provided in Table . 3.3. Obstetrician–gynecologists’ attitudes toward medication use during pregnancy A Likert-like scale was used to assess the proportion of obstetrician-gynecologists agreeing or disagreeing with various statements related to the information on the use of medications during pregnancy.
Forty-eight per cent strongly agreed that liability is a concern if there were to be an adverse pregnancy outcome following the use of medications. Additionally, 41% agreed on the lack of sufficient information about the safety of medication use during pregnancy, while 31% reported a lack of accessibility to the available information. Interestingly, 26.7% reported a lack of time to communicate the information available to patients as one of the drawbacks. Additional details are provided in Table . 3.4. Obstetrician–gynecologists’ rating of their training Participants were asked to rate their training on medication use during pregnancy, and the results are presented in Table . Those who had been in practice for more than 15 years were significantly more likely to rate themselves as well qualified ( P -value < 0.05). The majority adequately and significantly rated their training on prescribed medications (58.3%), OTC medications (45%) and dietary supplements or herbal remedies (32%) ( P value < .05).
To our knowledge, this is the first study in the nation that assesses Obstetrician–gynecologists’ knowledge of medications’ teratogenicity potential as well as the impact of their residency training on their decisions. The resources routinely used were also assessed. For a medication to be desirable, it must fulfill the following criteria: safe, effective, and indicated.
During pregnancy, women should refrain from taking medications as much as possible due to the teratogenicity risk. However, certain medical conditions require urgent or ongoing treatment, and the decision to treat is not without apprehension. Thus, obstetrician-gynecologists play a vital role in identifying when medications are warranted and which are safe to give during each trimester, in addition to adequately counseling patients. To assist in decision-making, the Food and Drug Administration (FDA) formerly stratified medications’ teratogenic effects into 5 categories (i.e., A, B, C, D, and X), with safety decreasing from category A to category X. However, it is challenging to assess the risk-benefit ratio using this classification. In 2015, the FDA updated its pregnancy and lactation labeling rule to overcome this issue. Nevertheless, even with the new FDA stratification, it is extremely challenging for physicians to make treatment decisions in this population. This is because the same medication can cause different fetal damage when taken in different trimesters, and because pregnant women are excluded from clinical trials due to ethical considerations, leaving great uncertainty. Therefore, safety information is commonly obtained from other sources such as animal experiments, nonclinical data, case reports, and epidemiological data, each of which has substantial limitations, adding to the ambiguity of treatment decisions in this population. In this study, participants’ level of knowledge regarding medication teratogenicity potential was assessed and revealed great variation. Most respondents reported inaccessibility to current information about medication teratogenicity risk and a lack of sufficient data, emphasizing the need for updated, accessible references to aid clinical decisions. A multidisciplinary team that includes clinical pharmacists in obstetrics and gynecology services as medication specialists would be of great benefit.
Clinical pharmacists’ contributions to the field have been reported in the literature, highlighting their role in preventing toxicity and death. Their expertise allows them to help select appropriate medications and adequately counsel patients regarding the safety of different treatment modalities, dietary supplements, and herbals. This is supported by previous evidence showing that clinical pharmacy services in Obstetrics and Gynaecology were associated with a high level of physician satisfaction and better patient care. When assessing participants’ knowledge about the safety of medications in the first trimester, the vast majority reported that Isotretinoin is contraindicated and acetaminophen is safe, which is consistent with the published literature. On the contrary, results varied with Alprazolam. That may be attributed to the weak level of evidence and lack of consensus on its effect on the fetus. Moreover, because Alprazolam falls into Category D and may be detrimental to the fetus, large prospective studies to assess its effect would be ethically difficult to conduct. In addition, 75% of responders stated that Vitamin A dietary supplements are not safe in the first trimester, which is far higher than a study conducted amongst community pharmacists, in which 48.4% reported it unsafe. As for the safety of herbals, participants showed a lack of sufficient knowledge of their use in this patient population. This uncertainty is alarming, as the prevalence of herbal medicine use among pregnant women in the Middle East ranges from 7% to 55%. These medications may harm the mother and child; thus, healthcare practitioners’ education is essential in this regard, as it also contributes to proper patient education. Several limitations exist in our study. The response rate remained low despite many reminders and visits to our participants.
That may be justified by the Obstetrician–gynecologists’ high-load nature of practice and busy service, hindering the data collection process. In addition, most responders were from the central region, affecting the results’ generalizability. Since the study used self-administered questionnaires, desirability bias may arise. It is also important to note that there was no way of determining whether responders relied on their own knowledge or consulted reference sources when filling out the questionnaire. A nationwide, paper-based study is recommended to overcome the limitations mentioned above and confirm the results of this study. Our study found that Obstetrician–gynecologists vary in their knowledge about the teratogenicity risk of medications and herbal remedies. These findings highlight the need to emphasize this during their training years and the importance of having this information readily available to health care providers in an updated form. This work was supported by the College of Prince Sultan Bin Abdulaziz for Emergency Medical Services Research Center, Deanship of Scientific Research, King Saud University, Riyadh, Saudi Arabia. Conceptualization: Mashael Alshebly and Sultan Alghadeer. Data curation: Bana Almadi. Formal analysis: Abdullah M. Mubarak. Funding acquisition: Sultan Alghadeer. Investigation: Haya Alturki and Jeelan Alghaith. Methodology: Sultan Alghadeer. Supervision: Mashael Alshebly and Sultan Alghadeer. Validation: Mashael Alshebly and Abdulrahman Alwhaibi. Visualization: Mashael Alshebly and Abdullah M. Mubarak. Writing – original draft: Haya Alturki and Jeelan Algaith. Writing – review and editing: Bana Almadi and Abdulrahman Alwhaibi.
Comprehensive geriatric assessment for predicting postoperative delirium in oral and maxillofacial surgery: a prospective cohort study | 874d6058-9c49-463d-8e54-bb09651ebe14 | 11554771 | Dentistry[mh] | Advancements in medicine allow for complex procedures in advanced-age patients who are frail and vulnerable. Chronological age alone cannot effectively gauge frailty. Stressors affect a frail body differently, and recovery rates vary significantly . Identifying frail and non-frail older adults in the preoperative phase can optimize patient care plans. Frail older adults face higher risks of readmission, longer hospital stays, malnutrition, functional and cognitive decline, higher complication rates, new disabilities, and increased mortality , . Postoperative delirium (POD), which is a serious neuropsychiatric disorder associated with medical, cognitive, and functional impairment, is a common complication in this population . A meta-analysis by Persico et al. found a significant association between delirium and frailty, with frail patients having a 2.2 times higher risk of developing delirium . Mortality was found to increase significantly by 11% for every 48 h of delirium in patients aged 65 and older . Validated geriatric assessments are crucial for evaluating various aspects of elderly life, identifying patients at risk, and assessing functional abilities . Standard preoperative geriatric assessments in oral and maxillofacial surgery are not well-established. Various validated geriatric assessments and screening tools are available in the literature, making it challenging to find a simple, goal-oriented collection of clinically significant assessments for daily preoperative use. Identifying assessments significantly associated with outcomes such as POD is essential for guiding effective prehabilitation plans. In view of the multifactorial nature of POD, it is important to support the implementation of non-pharmacological preventive intervention approaches. 
Although 30–40% of delirium cases are considered preventable , up to 72% of delirium events are not recognized or are misdiagnosed . This could be due to a lack of awareness, variation in delirium presentation, its fluctuating nature, and difficulty in assessing cognitively impaired patients. Clinicians often rely on general observation rather than structured assessments, leading to frequent misdiagnosis , . Oral and maxillofacial surgery adds unique challenges compared to other surgical specialties. Postoperative communication and the detection of incoherent thinking can be limited due to intraoral and facial swelling, acute oral pain, restricted mouth movement, and tracheostomy, complicating the use of standard delirium assessment tools that rely on verbal communication. Thus, postoperative screening alone, without a baseline preoperative assessment, might be insufficient. This prospective cohort study aims to identify geriatric screening tools that can aid in predicting postoperative delirium and to explore the high-risk group of elderly patients undergoing various oral and maxillofacial surgical procedures.
Study design This prospective observational cohort study was conducted from August 2022 through August 2023 at the department of Oral, Maxillofacial, and Plastic Facial Surgery at Duesseldorf University Hospital in Germany. The study protocol was approved by the Ethics Committee of Heinrich Heine University in Germany (approval number: 2022-1810) and was conducted following the Declaration of Helsinki. The study is reported according to the criteria in the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist. Participants Patients were included if they were above 70 years old, planned for elective or emergency maxillofacial surgical procedures as inpatients under general anesthesia, and agreed to participate in the assessment pre- and postoperatively. Patients were excluded if they had severe dementia, were unable to participate in the assessment pre- or postoperatively, if the operation was planned as an outpatient procedure or was canceled, had incomplete information that could not be statistically evaluated, or were unwilling to cooperate with the research. Variables and study setting The researchers performed a preoperative assessment following the study protocol. Table presents the screening tools used in the study. During the preoperative period, several screening tools were used to assess different variables, including patients’ functional abilities, cognition, nutritional status, mobility and strength, emotions, hearing impairment, sleep disruptions, comorbidities, and delirium risk status. POD was screened daily in the wards for patients enrolled in the study using three delirium screening tools. The intensive care unit (ICU) team assessed POD in the ICU. The POD evaluation period was seven days postoperatively or until the patient was discharged. All patients’ medical records were reviewed to collect patient demographics and relevant data from the anesthesia protocol, laboratory tests, and operation records.
Statistical analysis The collected data were recorded using Excel. Statistical analyses were performed with jamovi (version 2.2, The Jamovi Project, 2021) and R (version 4.0, R Core Team, 2021). Statistical significance was set to a p-value of < 0.05 in all analyses. Exploratory data analysis and descriptive statistics were utilized for descriptive evaluation and to investigate the characteristics of the data. The Chi-square test and independent samples t-test were used to determine the association between different variables and the delirium rate in our dataset. For screening tools assessed at different time points, the Wilcoxon Signed-Rank Test was used. Binomial logistic regression was employed to evaluate the relationship between the presence or absence of POD and other independent variables. Other statistical tests, such as Welch’s t-test, were used when comparing two groups with unequal variances, and the Mann-Whitney U Test was utilized for comparing independent groups with non-normally distributed variables.
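The chi-square association tests with Cramér’s V reported in the results can be sketched with scipy; scipy does not return Cramér’s V directly, so it is computed by hand below, and the 2 × 2 contingency table is illustrative rather than the study’s actual data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Illustrative contingency table: rows = risk category, columns = POD yes / no
table = np.array([[5, 40],
                  [3, 42]])

# correction=False disables Yates' continuity correction so the statistic
# matches the plain chi-square formula used for Cramér's V
chi2, p, dof, expected = chi2_contingency(table, correction=False)

# Cramér's V = sqrt(chi2 / (n * (min(rows, cols) - 1)))
n = table.sum()
v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.3f}, Cramer's V = {v:.3f}")
```

For a 2 × 2 table Cramér’s V reduces to the phi coefficient; values near 0 indicate a negligible association, which is the pattern most of the screening tools in this study showed.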
The present cohort consists of 90 patients who underwent a maxillofacial surgical procedure under general anesthesia during the data collection period and agreed to participate in this prospective study. Data from 43 women and 47 men, with an average age of 79.0 ± 5.7 years, were analyzed. An overview of demographic data is presented in the following table (Table ). Multiple screening tools and scores were evaluated during the preoperative assessment. These tools are categorized in Table according to their assessment domains, which include comorbidity status, delirium risk status, functional status, cognitive status, nutritional status, mobility and strength status, emotional status, hearing impairment, and sleeping disruption. Their association with the occurrence of POD was assessed and statistically evaluated. The postoperative delirium rate of this cohort was 8.9% ( n = 8). The summarized results of the association of comprehensive geriatric assessment instruments with POD are presented in Table . The Comorbidity-Polypharmacy Score (CPS) quantitatively measures the comorbidity’s severity and can be used as an initial assessment. In this study, POD did not occur in the two highest-risk CPS categories, and the statistical association between POD occurrence and CPS risk level was not significant, with χ²(3) = 4.55, p = 0.371, and Cramer’s V = 0.225. The risk of delirium was assessed using the DRAT and the ACB score. Although higher rates of POD were observed in participants with elevated scores, neither tool demonstrated a significant association between POD occurrence and higher delirium risk categories (χ²(1) = 0.651, p = 0.480, Cramer’s V = 0.085 for DRAT; χ²(1) = 0.0566, p = 1.000, Cramer’s V = 0.0252 for ACB). Additionally, evaluations of frailty and sarcopenia risk using the SARC-F and CFS also showed no significant statistical relationship with POD occurrence.
As part of the comprehensive geriatric assessment, the functional status and basic activities of daily living were assessed using the Katz Index and IADL score. The mobility status was assessed and documented with DEMMI score. The Katz Index shows average values of 6 ± 1.1 ( n = 90) at admission and 6 ± 1.14 ( n = 83) at discharge. The comparison of both values showed no significant difference z = 10.0, p = 0.072, r = 0.224. For the IADL score, the values at admission were 8.0 ± 2.04 for women and 5.00 ± 1.30 for men. The analysis of the DEMMI reveals values of 92.5 ± 27.4 ( n = 88) at admission and 74 ± 26.8 ( n = 81) at discharge. Comparing the values at admission and discharge shows a significant decline in the DEMMI values (z = 29.0, p = 0.021, r = 0.265) (Supplemental Fig. 1). A binomial logistic regression was performed to demonstrate the effect of potential mobility impairment on the POD rate. The binomial logistic regression model is significant with χ²(5) = 15.0, p = 0.010, and Nagelkerke’s R² = 0.419. The model’s accuracy is 93.6%, with a specificity of 98.6% and a sensitivity of 33.3%. Among the variables examined, the preoperative DEMMI ( p = 0.006) and the DEMMI at discharge ( p = 0.007) are significant. With an OR = 1.176 (95% CI [1.047, 1.322]), the DEMMI at admission is a positive predictor, and with an OR = 0.813 (95% CI [0.701, 0.944]), the DEMMI at discharge is a negative predictor. All model coefficients and odds are listed in the following table (Supplemental Table 1). The MNA-SF score assessment tool was utilized to assess the preoperative nutritional status. Although a higher rate of POD was observed in the group of patients at risk of malnutrition and those who were malnourished, a significant relationship could not be established, with χ²(2) = 4.69, p = 0.139, and Cramer’s V = 0.228. To further assess nutritional status, skinfold thickness measurements over the triceps muscle were performed. 
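The odds ratios reported for the DEMMI model above are exponentiated logistic-regression coefficients. A minimal sketch of a binomial logistic fit on synthetic data (not the study’s dataset; the variable names, coefficients, and sample values are purely illustrative) shows where such ORs come from:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
n = 90
adm = rng.normal(85, 25, n)                # synthetic DEMMI at admission
dis = adm - np.abs(rng.normal(15, 10, n))  # synthetic DEMMI at discharge
X = np.column_stack([np.ones(n), adm, dis])

# Synthetic POD outcome loosely tied to in-hospital mobility decline
p_true = 1 / (1 + np.exp(-(-2.0 + 0.03 * (adm - dis))))
y = (rng.random(n) < p_true).astype(float)

def neg_log_lik(beta):
    eta = X @ beta
    # negative binomial-logistic log-likelihood, numerically stable form
    return -np.sum(y * eta - np.logaddexp(0.0, eta))

fit = minimize(neg_log_lik, np.zeros(3), method="BFGS")
odds_ratios = np.exp(fit.x[1:])  # OR per one-point change in each DEMMI score
print(odds_ratios)
```

An OR above 1 (as reported for DEMMI at admission) means higher odds of POD per one-point increase, while an OR below 1 (DEMMI at discharge) means lower odds; confidence intervals would come from the curvature of the log-likelihood, which BFGS approximates via the inverse Hessian.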
The results indicated that women had a mean skinfold thickness of 12.0 ± 7.21 mm, while men had a mean value of 14.0 ± 6.49 mm. Additionally, grip strength testing was conducted, revealing mean values of 22.0 ± 6.50 kg for women and 34.0 ± 10.6 kg for men. Two tools were used to assess cognitive function. The six-item screener did not show a statistically significant result (χ²(1) = 3.42, p = 0.123, Cramer’s V = 0.195). In contrast, with the clock-drawing test, a significant relationship was evident (χ²(1) = 14.4, p = 0.005, Cramer’s V = 0.402), indicating that CDT scores are moderately predictive of delirium occurrence. Different screening tools were used to assess sleep disorders; none of the tests revealed a significant relationship to POD occurrence. Only patients with an increased risk of obstructive sleep apnea showed a higher rate of POD occurrence. In this study, patients underwent a wide range of surgeries that differed in complexity, duration, and stress levels. When comparing various types of surgeries, tumor patients, as the most vulnerable group (major surgery), exhibited a delirium rate of 31.6% ( n = 6) (Supplemental Table 2). The Chi-Square test revealed a statistically significant association between operation type and POD occurrence (χ²(7) = 16.94, p = 0.018, Cramer’s V = 0.434). The patients who underwent major surgery ( n = 19) were analyzed separately to examine the relationship between POD occurrence and all the screening tools and scores used; none showed a significant relationship. A statistically significant difference was observed in the operation duration between patients with and without delirium. The operation duration for patients without delirium was 229 min shorter (95% CI [36, 422]), t(7.34) = 2.78, p = 0.026, Cohen’s d = 1.26 (Welch’s t-Test). A Mann-Whitney U test was conducted to determine if there was a difference in the length of hospital stay between patients with and without delirium. 
The distributions of the two groups differed significantly, Kolmogorov-Smirnov p < 0.001. There was a significant difference in the length of hospital stay in days between patients with and without delirium (U = 105, p = 0.002, r = 0.677). As hemoglobin (Hb) level is considered one of the risk factors for delirium, we examined it in the pre- and postoperative phases. The preoperative Hb levels had no significant relationship with POD, as indicated by t(84.0) = −0.910, p = 0.365, Cohen’s d = −0.338. However, the difference in Hb levels in the postoperative phase was significant, with patients experiencing POD having Hb levels 1.881 mg/dL (95% CI [0.215, 3.55]) lower than those without POD, shown by t(75.0) = −2.249, p = 0.027, Cohen’s d = −0.892. Postoperative ICU stay and the presence of a tracheostomy are known risk factors for POD. The analysis revealed higher rates of POD in patients with a postoperative ICU stay and those with a tracheostomy. However, these results were not statistically significant. For tracheostomy cases, delirium occurred in 20% of cases compared to 10% in non-tracheostomy cases (χ²(1) = 0.172, P = 0.678, Cramer’s V = 0.044). For ICU stay cases, delirium occurred in 33.33% of cases compared to 10.26% in non-ICU cases, also showing no significant association in this cohort (χ²(1) = 3.004, P = 0.083, Cramer’s V = 0.183). In this study, three different delirium screening tools, 4AT, NuDESC, and CAM, were used in the postoperative phase. The results showed no discrepancies, as each tool consistently indicated either the presence or absence of POD. This uniformity highlights the reliability and potential interchangeability of these screening methods in detecting delirium within the studied cohort.
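The two between-group comparisons used above for operation duration and length of stay (Welch’s t-test for unequal variances, Mann-Whitney U for the non-normally distributed stay data) map directly onto scipy calls; the arrays below are illustrative numbers, not the study’s measurements:

```python
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(1)
# Illustrative operation durations (min) and lengths of stay (days)
dur_pod = rng.normal(420, 150, 8)      # patients with POD
dur_no_pod = rng.normal(190, 90, 82)   # patients without POD
los_pod = rng.poisson(20, 8)
los_no_pod = rng.poisson(8, 82)

# Welch's t-test: equal_var=False drops the equal-variance assumption
t, p_t = ttest_ind(dur_pod, dur_no_pod, equal_var=False)

# Mann-Whitney U test for the skewed length-of-stay distributions
u, p_u = mannwhitneyu(los_pod, los_no_pod, alternative="two-sided")
print(f"Welch t = {t:.2f}, p = {p_t:.4f}; U = {u:.0f}, p = {p_u:.4f}")
```

Welch’s test is preferred here because the POD group is small (n = 8) and its duration variance differs from the much larger non-POD group, a setting in which the pooled-variance Student’s t-test is unreliable.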
Importance and challenges of comprehensive geriatric assessment Comprehensive geriatric assessment is necessary in light of the increasing elderly population and the complexity of surgeries performed. The current treatment modalities for patients with multiple comorbidities are fragmented and poorly coordinated. Holistic, patient-centered management approaches, which consider all factors influencing the patient’s condition and treatment outcome, are recommended . Despite its importance, there is no consensus on geriatric assessment in surgical specialties . Each specialty has unique characteristics and postoperative limitations. Integrating a full geriatric assessment into the daily routine of a busy surgical team is challenging, highlighting the need for objective, goal-oriented instruments. A noteworthy approach was implemented in the EASE ( Elder-Friendly Approaches to the Surgical Environment ) initiative to create an evidence-based, elder-friendly surgical environment by incorporating geriatric assessments for patients undergoing emergency surgeries in general surgery departments. This resulted in many positive outcomes, including a 19% reduction in mortality (51 of 153 [33.3%] vs. 19 of 140 [13.6%]; P < 0.001), as well as reduced complications, length of stay, and discharge to care facilities . These positive results are very promising and encouraging, suggesting that similar practices should be adopted in elective settings. POD in OMFS POD is a serious condition that delays recovery, leads to complications, and increases hospital length of stay. Early recognition and management are essential to reduce these adverse outcomes, particularly in patients undergoing major surgical procedures. The incidence rate of POD in our cohort was 8.9%, rising to 31.6% in patients undergoing major surgery, consistent with previously published data , . Major operations with extended durations are known risk factors for POD in OMFS. 
Previous studies reported an increased risk with operation durations of 6–10 h , , and one study noted that each additional 10 min of surgery increases the odds of POD by 3.2% . Kinoshita et al. noted that POD could lead to OMFS-relevant complications such as flap necrosis , which not only increases the length of stay but also increases exposure to multiple surgical procedures under general anesthesia. This may lead to further complications and delay additional adjunct treatment in patients requiring chemotherapy and radiotherapy. Training medical teams to recognize and manage POD, particularly the hypoactive type, is crucial. Some studies have discussed that the presence of a tracheostomy in the postoperative period can increase the possibility of overlooking or misdiagnosing hypoactive delirium due to challenged communication , . In our cohort, this could not be elaborately reported due to the limited number of patients with tracheostomies. Future studies focusing on this subgroup may help identify the proper management. Previous prospective studies related to POD failed to report a cognitive baseline assessment. In our study, the CDT was a significant predictor of POD, underscoring its utility in cognitive preoperative evaluations. This finding aligns with previous research by Goldstein et al., which found a strong association between cognitive impairment and postoperative delirium . Routine use of cognitive screening tools such as the CDT in preoperative assessments could enhance the early identification of at-risk patients. The association between the change in hemoglobin level post-surgery in patients who developed POD aligns with findings by Makiguchi et al., suggesting that monitoring and managing hemoglobin levels could be crucial in preventing POD . 
Given the association between obstructive sleep apnea (OSA) and POD and the possibility of modification or early intervention in some cases , incorporating OSA screening into preoperative assessments using an instrument like STOP-BANG could be beneficial, as higher-risk groups showed an elevated POD rate in our cohort. Although insomnia has been reported as a significant risk factor in the OMFS literature , this association could not be established in our cohort using the Insomnia Severity Index. Comprehensive and individualized approaches A comprehensive approach that sets individualized goals for each patient is essential. Proper interventions in terms of the choice of surgical procedure and monitoring intraoperative factors such as operation duration should be applied . Efficiently directing resources to high-risk groups identified through comprehensive geriatric assessments could improve patient outcomes and reduce healthcare costs. Future research should continue refining these strategies and exploring additional factors contributing to POD. Study strengths and limitations Our cohort study is the first, to our knowledge, to examine a comprehensive geriatric assessment, including the multifactorial aspects of POD, in the field of oral and maxillofacial surgery. Besides being prospective, the strengths of our study include using multiple instruments in parallel to determine which is most clinically relevant, time-efficient, and practical for application without extensive training. Another strength that many previous studies have lacked is the inclusion of baseline cognitive assessments, which are highly relevant to delirium diagnosis. Despite the relatively adequate number of patients included, we acknowledge that the sample size is still insufficient. One of the main difficulties encountered during the recruitment phase was that patients felt overwhelmed by the number of examinations required, in addition to ongoing medical diagnostics and examinations.
Integrating the most relevant tests that carry significant clinical outcomes into routine examinations during all preoperative visits could easily address this issue. Another limitation of our study is the low number of delirium cases diagnosed compared to the many variables tested, which introduces a risk of over-fitting. Future studies should focus on comprehensive but goal-specific instruments that objectively document clinical findings and can be reproduced by each member of the team. It is also crucial to utilize resources by focusing on high-risk groups, particularly those undergoing longer and major operations.
Comprehensive geriatric assessment is necessary in light of the increasing elderly population and the growing complexity of the surgeries performed. Current treatment modalities for patients with multiple comorbidities are fragmented and poorly coordinated. Holistic, patient-centered management approaches, which consider all factors influencing the patient’s condition and treatment outcome, are recommended. Despite its importance, there is no consensus on geriatric assessment in the surgical specialties. Each specialty has unique characteristics and postoperative limitations. Integrating a full geriatric assessment into the daily routine of a busy surgical team is challenging, highlighting the need for objective, goal-oriented instruments. A noteworthy approach was implemented in the EASE (Elder-Friendly Approaches to the Surgical Environment) initiative, which created an evidence-based, elder-friendly surgical environment by incorporating geriatric assessments for patients undergoing emergency general surgery. This produced many positive outcomes, including a reduction in mortality from 33.3% (51 of 153) to 13.6% (19 of 140) (P < 0.001), as well as reduced complications, length of stay, and discharge to care facilities. These results are promising and suggest that similar practices should be adopted in elective settings.
POD is a serious condition that delays recovery, leads to complications, and increases hospital length of stay. Early recognition and management are essential to reduce these adverse outcomes, particularly in patients undergoing major surgical procedures. The incidence rate of POD in our cohort was 8.9%, rising to 31.6% in patients undergoing major surgery, consistent with previously published data. Major operations with extended durations are known risk factors for POD in OMFS. Previous studies reported an increased risk with operation durations of 6–10 h, and one study noted that each additional 10 min of surgery increases the odds of POD by 3.2%. Kinoshita et al. noted that POD can lead to OMFS-relevant complications such as flap necrosis, which not only prolongs the length of stay but also increases exposure to repeat surgical procedures under general anesthesia. This may lead to further complications and delay adjunct treatment in patients requiring chemotherapy and radiotherapy. Training medical teams to recognize and manage POD, particularly the hypoactive type, is crucial. Some studies have noted that the presence of a tracheostomy in the postoperative period can increase the likelihood of overlooking or misdiagnosing hypoactive delirium because communication is impaired. In our cohort, this could not be analysed in detail due to the limited number of patients with tracheostomies. Future studies focusing on this subgroup may help identify the proper management. Previous prospective studies related to POD have failed to report a baseline cognitive assessment. In our study, the CDT was a significant predictor of POD, underscoring its utility in preoperative cognitive evaluations. This finding aligns with previous research by Goldstein et al., which found a strong association between cognitive impairment and postoperative delirium.
Routine use of cognitive screening tools such as the CDT in preoperative assessments could enhance the early identification of at-risk patients. The association between the postoperative change in hemoglobin level and POD aligns with findings by Makiguchi et al., suggesting that monitoring and managing hemoglobin levels could be crucial in preventing POD.
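The reported 3.2% increase in the odds of POD per additional 10 minutes of operating time compounds multiplicatively over longer operations. The short sketch below illustrates this; the multiplicative-compounding assumption and the time scenarios are ours for illustration, not results from the cited study.

```python
# Illustrative only: compounding the reported 3.2% increase in the odds of
# postoperative delirium (POD) per additional 10 minutes of surgery.
# The compounding assumption and the scenarios below are hypothetical,
# not data from the cited study.

def odds_ratio_for_extra_time(extra_minutes: float,
                              or_per_10_min: float = 1.032) -> float:
    """Cumulative odds ratio for `extra_minutes` of additional operating
    time, assuming the per-10-minute odds ratio applies multiplicatively."""
    return or_per_10_min ** (extra_minutes / 10)

for extra in (10, 30, 60, 120):
    print(f"+{extra:>3} min of surgery -> POD odds ratio "
          f"{odds_ratio_for_extra_time(extra):.2f}")
```

Under this assumption, one extra hour of surgery corresponds to an odds ratio of roughly 1.032^6 ≈ 1.21, i.e. about a 21% increase in the odds of POD.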
Our findings identified multiple geriatric assessment instruments relevant to OMFS that can be easily administered in the preoperative phase. A suggested comprehensive assessment model, which reflects the multifactorial nature of POD and includes the tests that showed clinical relevance to POD, is presented in Fig. . POD has multiple predisposing factors that are not always modifiable and many precipitating factors that can be detected and prevented. Identifying high-risk groups and educating them and their families is key to a prehabilitation plan. Additionally, educating and training the medical team about the seriousness of POD and its short- and long-term complications is crucial. Focusing on early detection through regular screening, applying non-pharmacological measures, and using pharmacological interventions when necessary are the first steps toward developing a specialty-specific package of measures that can significantly improve patient care and outcomes.
Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2
Using Lean Six Sigma techniques to improve efficiency in outpatient ophthalmology clinics

Medical services worldwide face an aging population and, with it, an increasing burden of disease. Continuous improvement in diagnosis and management is resulting in better patient outcomes, but also increasing demands on healthcare resources. Together, increasing patient numbers, increasing complexity of patient assessment and management, and limitations on healthcare resources have resulted in prolonged patient wait times, decreased quality of service, and decreased patient satisfaction in many outpatient services across many medical specialities in both developed and developing nations. With a focus on improving workflows, process efficiency, and reducing variability in production/service delivery, Lean and Six Sigma are two well-known management methodologies from manufacturing that may be used to help address these growing issues in outpatient healthcare settings. Lean, derived from the Toyota Production System, is a process improvement methodology focused on reducing ‘waste’ (steps that do not add value to the final service/product) to improve efficiency. ‘Waste’ is typically considered in seven categories: waiting, unnecessary transport, unnecessary human motion, inventory, over-processing, rework, and overproduction. Examples of ‘waste’ in outpatient clinics include patients waiting (inventory), inappropriate testing (overproduction), or idle staff (waiting). As the patient journey through an outpatient clinic is similar to a production process, with creation of relative value units through multiple steps (e.g. patient check-in, initial nursing/allied health evaluation, ophthalmologist examination, and check-out), Lean techniques may be adapted to optimise patient flow and reduce ‘waste’.
Six Sigma, originally developed by Motorola in 1986, is a structured methodology to identify and eliminate defects and reduce variation in production processes. The methodology consists of five steps: Define, where issues in a process are defined from business and customer perspectives; Measure, where the process is broken down and explored; Analyse, where data is analysed to identify underlying root causes of issues; Improve, where solutions are developed, piloted and implemented to address root causes; and Control, where solutions are sustained through process control plans and ongoing monitoring. Outpatient clinics often have a high degree of variability contributing to clinic inefficiency (e.g. different pathologies, differing clinician preferences), and Six Sigma focusses on minimising this variability where possible to streamline processes. Due to their overlap, Lean and Six Sigma are often combined in a “Lean Six Sigma” approach. In recent times, Lean Six Sigma has been increasingly applied in healthcare. There are few studies, however, examining its efficacy in improving publicly-funded outpatient ophthalmology services. This project studied the effect of applying Lean Six Sigma in a publicly-funded tertiary referral outpatient ophthalmology service to reduce the duration and variability of patient in-clinic times and improve service efficiency.
Practice setting

Royal North Shore Hospital Eye Clinic is a publicly-funded multi-subspecialty outpatient ophthalmology service in Sydney, Australia. Over 8,000 appointments are seen every year across six subspecialties, with referrals received from primary care and specialist doctors, optometrists and general ophthalmologists. The clinic also provides ‘on-call’ ophthalmic care to inpatients of Royal North Shore Hospital (> 600 beds) and patients presenting to emergency departments across the Northern Sydney Local Health District (> 185,000 presentations/year). The clinic runs nine half-day (240-minute) sessions every week. It is staffed by a roster of eight consultant subspecialist ophthalmologists (one on the floor for each subspecialty session and one always ‘on-call’), three ophthalmology registrars (two for all sessions, one of whom is ‘on-call’ for emergency and inpatient consults), six nurses (two for all sessions) and one orthoptist (for all sessions). In any session, patients are evaluated in a multi-step process including check-in, screening (nursing/orthoptic staff assessment), investigations, ophthalmologist review and check-out. Between each step, if patients are not passed directly to the next staff member immediately, they are returned to the waiting area or seated outside the next applicable room in the patient journey (e.g. outside the investigation room or the ophthalmologist’s room). Within the clinic there are three rooms for screening, three rooms for ophthalmologist review, two rooms for investigations and two rooms for procedures. When a session is in progress, all rooms are dedicated to that session alone. In general, patients are booked into planned appointment slots within a session. When emergency or inpatient consults are requested, however, they may be fitted in on an ad hoc basis depending on clinical urgency.
Key measures (“Define” phase)

This study’s outcome measures were: duration (median) and variability (interquartile range) of patient in-clinic time, and number of patients seen per session pre- and post-implementation. Patient in-clinic time was defined as the number of minutes from the later of the appointment time or the patient check-in time until patient check-out. This was done to reduce the effect that patients arriving early (in which case appointment time was used) or late (in which case check-in time was used) had on the variability of in-clinic time.

Data collection

Cerner Scheduling Appointment Book (Cerner, North Kansas City, USA) was used to schedule patient appointments. This program allowed creation of a timetable with specific appointment times and types (e.g. new, follow-up, emergency) for patients to be booked into. When patients attended appointments, it recorded the times patients were checked in and checked out by administrative staff. Waiting time before check-in or after check-out (e.g. waiting for transport) was not captured. Two five-month audits of all attended appointments were conducted to determine the efficacy of the Lean Six Sigma process. A baseline audit (“Measure” and “Analyse” phases) was retrospectively conducted from February 1st to June 30th 2018. A post-implementation audit (“Control” phase) was conducted from February 1st to June 30th 2019.

Data analysis

Patient age, gender, appointment time, appointment type, check-in time and check-out time were captured. Appointments with incomplete time data or coding errors (i.e. visits with no end time, or a total duration of 0 or greater than 480 minutes) were included in the count of patients seen but excluded from analysis of duration and variability of patient in-clinic time. The difference in duration of patient in-clinic times pre- and post-implementation was assessed using the Mann–Whitney U test in SPSS (v24, IBM Corporation, Armonk, USA).
The difference in variability of patient in-clinic times was assessed using the Brown–Forsythe test in Excel (Microsoft, Redmond, USA). The difference in number of patients seen per session was assessed using Student’s t-test (SPSS). Differences in the proportions of patient appointment types seen were assessed using chi-squared tests, with Z-tests (with Bonferroni correction) used to compare pairwise differences between pre- and post-implementation proportions of appointment types (Excel). The difference in mean ages of patients with valid versus invalid in-clinic time data was assessed using Student’s t-test, while differences in proportions of gender were assessed using the chi-squared test (SPSS).

Process flow maps and time-in-motion analysis

Two patient process flows fit most patient journeys through the clinic: one where investigations were performed, and one without investigations. Process flow maps outlining the steps in these journeys were created (Fig. ). A two-week time-in-motion study was conducted from June 11th to June 24th 2018 to determine the proportions of total in-clinic time spent in each step of the patient journey. In this study, staff members noted the times they commenced and ended their roles in the patient journey on a dedicated audit document. Time between each staff member’s contact time was treated as waiting time. The time-in-motion data were analysed in Excel. Visits with coding errors (i.e. no time entered, or times with inconsistent patient flow) were excluded. Proportions of total in-clinic time were determined and superimposed on the patient process flow maps to identify bottlenecks in the patient journey (Fig. ).

Root cause analysis

Staff interviews, workshops, and review of patient complaint data were used to identify issues causing prolonged duration and increased variability of patient in-clinic time and clinic inefficiency. Following this, root cause analysis of the issues was undertaken using the “Five Whys” technique.
Resulting root causes were grouped and the most common root causes targeted for solution development.
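The in-clinic time definition, the exclusion rule, and the two headline statistical comparisons described above can be sketched in Python. The paper used SPSS and Excel; here scipy is used instead (scipy's `levene` with `center='median'` is the Brown-Forsythe test), and all timestamps and durations are invented for illustration.

```python
# Illustrative sketch of the in-clinic time metric, the exclusion rule, and
# the two headline tests. The paper used SPSS and Excel; scipy is used here
# (scipy's levene with center='median' is the Brown-Forsythe test).
# All timestamps and duration samples below are invented.
from datetime import datetime
from scipy.stats import mannwhitneyu, levene

def in_clinic_minutes(appointment: datetime,
                      check_in: datetime,
                      check_out: datetime) -> float:
    """Minutes from the later of appointment or check-in time to check-out."""
    start = max(appointment, check_in)
    return (check_out - start).total_seconds() / 60

def is_valid(minutes: float) -> bool:
    """Exclusion rule: drop visits with a duration of 0 or over 480 minutes
    (visits with no check-out time are excluded before this step)."""
    return 0 < minutes <= 480

# Early arriver: the 9:00 appointment time, not the 8:45 check-in, is used.
assert in_clinic_minutes(datetime(2019, 3, 1, 9, 0),
                         datetime(2019, 3, 1, 8, 45),
                         datetime(2019, 3, 1, 10, 30)) == 90.0

# Hypothetical pre- and post-implementation samples of in-clinic minutes.
baseline = [84, 131, 217, 150, 95, 260, 120, 310, 70, 180]
post = [71, 107, 162, 130, 90, 140, 100, 155, 80, 120]
baseline = [m for m in baseline if is_valid(m)]
post = [m for m in post if is_valid(m)]

# Difference in duration (medians): Mann-Whitney U test.
_, p_duration = mannwhitneyu(baseline, post, alternative="two-sided")

# Difference in variability: Brown-Forsythe test
# (Levene's test centred on the group medians).
_, p_variability = levene(baseline, post, center="median")

print(f"Mann-Whitney U p = {p_duration:.3f}, "
      f"Brown-Forsythe p = {p_variability:.3f}")
```

Centring Levene's statistic on the median rather than the mean makes the variability test robust to the right-skewed waiting-time distributions typical of clinics.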
Baseline audit (“Measure” and “Analyse” phases)

During the baseline audit period there were 3624 visits across 187 sessions of 240 minutes (average 19.3 patients/session). Of these visits, 2241 had valid time data for analysis. Median patient in-clinic time was 131 minutes and the interquartile range was 133 minutes (quartile 1 to quartile 3: 84–217 minutes). Of the visits with invalid data, 13 had invalid in-clinic times (patients arriving, being seen and discharged before their appointment time), while the remaining 1370 had invalid check-out times (checked out the following day). Comparing the invalid and valid data cohorts, there were no significant differences in age (invalid: 58.4 ± 23.2 years; valid: 58.0 ± 23.4 years, p = 0.568) or gender (invalid: 49.2% female; valid: 48.8% female, p = 0.743), and only minimal differences in the proportions of appointment types (Table ). There were 329 visits during the two-week time-in-motion study. Of these, 195 had valid data for analysis. Two bottlenecks within the clinic were identified. The first, between patient check-in and screening, accounted for 33–39% of total in-clinic time depending on the care pathway. The second, before seeing the ophthalmologist, accounted for 35% of total in-clinic time. Overall, over 70% of patient in-clinic time was spent waiting in both care pathways (Fig. ). Through ten patient interviews, ten staff interviews, two staff workshops (including all staff working in the clinic), and an audit of patient complaint data, 100 unique issues causing prolonged patient in-clinic time and clinic inefficiencies were identified. Ten common root causes emerged from root cause analysis, with four contributing to 77% of the issues encountered (Fig. ). Scheduling was the most commonly occurring root cause (32% of identified issues), so the scheduling data were explored further. As seen in Fig. a, most patients were scheduled to arrive in the middle of clinics.
This was due to the clinic schedule design and the ad hoc addition of inpatient and emergency patients to already fully-booked sessions through the clinic’s ‘on-call’ service. Patient influxes at these times were the primary contributor to the bottleneck at the start of the care pathway, between check-in and screening.

Process improvements (“Improve” phase)

Four main root causes (scheduling, staffing, patient communication, and clinic processes) were responsible for 77% of the issues encountered (Fig. ). Although funding was not available to address staffing, several targeted, negligible-cost interventions were implemented to address the remaining three main root causes. To address poor patient scheduling, the clinic schedule was revised to control patients’ arrival times. This involved: moving the start time of screening staff and patient appointments to 7:30am so patients could be screened and ready to see the ophthalmologist at 8am; revising appointment slot lengths to better align with the needs of each appointment type; creating dedicated ‘on-call’ emergency and inpatient appointment placeholders to reduce ad hoc scheduling of these patients; and providing the ‘on-call’ registrar with a ‘live’ scheduling app to allow easier identification of available appointment slots for ad hoc bookings. Furthermore, a dedicated postoperative clinic was introduced for 1-week and 4-week postoperative follow-up visits, as these had low-variation care pathways amenable to optimisation through grouping into a dedicated clinic. The impact of these solutions is shown in Fig. b. To address inefficient clinical processes, further staff feedback was sought on potential solutions and the following three solutions were developed:

Medications

Initially, many frequently used medications (e.g. valacyclovir, timolol, brinzolamide, preservative-free lubricants) were often not readily available in-clinic.
This disrupted patient flow, requiring clinicians to call the hospital pharmacy to request the medications and patients to wait for them to be delivered. To address this, imprest medication lists were reviewed and updated to include these medications, and daily checks were implemented to ensure that adequate supplies were available in-clinic.

Triage

Initially, there was no standard order in which to see patients after check-in, with different staff using different approaches. There was no prioritisation system for patients with higher clinical need (e.g. inpatients, unwell persons), and no clear instruction that the paper files of newly checked-in patients be placed in appointment order in the clinic’s ‘patients to be seen’ box. As clinicians generally picked up patient files from the top of the box, patients were seen out of chronological order, disrupting patient flow and increasing variability in in-clinic time. To address these issues, defined escalation criteria were created for patients with clinical or other special requirements, clear instructions were given to place the files of newly checked-in patients in appointment order, and clinicians were instructed to see all patients in order of appointment time unless there was an urgent clinical need.

Investigations

Initially, there was no process to clearly document the investigations needed for follow-up patients at their next appointment. This caused inefficiency: some patients needed to return to the investigation room after seeing the ophthalmologist for further tests, while others underwent unnecessary non-invasive investigations. To address this, a standard clinic documentation template was introduced for investigations required at the next follow-up visit, with the aim of prompting clinicians to consider and order appropriate investigations in advance (Online supplement: Documentation Template).
Based on the root cause analysis finding that poor patient communication accounted for 16% of issues in the clinic, all written patient communications were reviewed. Referral acknowledgement letters were updated to provide more accurate information regarding wait times for an initial appointment. Clinic information sheets and posters were developed to inform patients what to expect during their clinic visit. Fact sheets for common ophthalmological conditions and surgical procedures were introduced to improve and standardise patient education, while also potentially reducing the clinician face time needed to provide this education. Consumer representatives reviewed and provided feedback on all revised patient communications.

Follow-up analysis (“Control” phase)

During the post-implementation period there were 3853 clinic visits across 183 sessions of 240 minutes (average 21.1 patients per session), a 9% increase in patients per session compared with the baseline period (p < 0.016). Of these visits, 3490 had valid data for analysis. Median patient in-clinic time was 107 minutes and the interquartile range was 91 minutes (quartile 1 to quartile 3: 71–162 minutes). This was a significant reduction in both the duration and the variability of patient in-clinic time compared to baseline (both p < 0.001) (Fig. ). Of the visits with invalid data, 11 had no check-out time, 71 had invalid waiting times (patients arriving, being seen and discharged before their appointment time), and 281 had invalid check-out times (checked out the following day). Comparing the invalid and valid data cohorts, there were no significant differences in age (invalid: 56.5 ± 23.3 years; valid: 58.1 ± 22.8 years, p = 0.190), gender (invalid: 46.5% female; valid: 43.8% female, p = 0.321) or proportions of appointment types (Table ).
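The appointment-type comparison described in the Methods (an overall chi-squared test, then Bonferroni-corrected pairwise two-proportion Z-tests) can be sketched as follows. The appointment-type labels and counts are invented for illustration; they are not the study's data.

```python
# Illustrative sketch: chi-squared test across appointment-type counts,
# followed by Bonferroni-corrected two-proportion z-tests per type.
# Labels and counts are invented, not the study's data.
from math import sqrt
from scipy.stats import chi2_contingency, norm

types = ["new", "follow-up", "emergency", "postoperative"]
pre = [600, 2200, 500, 324]    # hypothetical baseline counts
post = [700, 2300, 450, 403]   # hypothetical post-implementation counts

# Overall test: are the appointment-type distributions different?
chi2, p_overall, dof, _ = chi2_contingency([pre, post])
print(f"overall chi-squared p = {p_overall:.3f}")

n_pre, n_post = sum(pre), sum(post)
alpha = 0.05 / len(types)      # Bonferroni correction over pairwise tests
for label, a, b in zip(types, pre, post):
    p1, p2 = a / n_pre, b / n_post
    pooled = (a + b) / (n_pre + n_post)
    se = sqrt(pooled * (1 - pooled) * (1 / n_pre + 1 / n_post))
    z = (p1 - p2) / se
    p_val = 2 * norm.sf(abs(z))  # two-sided p from the normal tail
    flag = "significant" if p_val < alpha else "ns"
    print(f"{label:>14}: z = {z:+.2f}, p = {p_val:.4f} ({flag})")
```

The Bonferroni division of alpha by the number of appointment types keeps the family-wise error rate at 0.05 across the four pairwise comparisons.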
During the baseline audit period there were 3624 visits over 187 240-minute sessions (average 19.3 patients/session). Of these visits, 2241 had valid time data for analysis. Median patient in-clinic time was 131 minutes and the interquartile range 133 minutes (84–217, quartile 1- quartile 3). Of visits with invalid data, 13 had invalid in-clinic times (due to patients arriving, being seen and discharged before their appointment time), while the remaining 1370 had invalid check-out times (checked-out the following day). Comparing invalid to valid data cohorts, there were no significant differences in age (invalid: 58.4 ± 23.2 years; valid 58.0 ± 23.4 years, p = 0.568) or gender (invalid: female 49.2%; valid: female 48.8%, p = 0.743), and only minimal differences in proportions of appointment types (Table ). There were 329 visits during the two-week time-in-motion study. Of these, 195 had valid data for analysis. Two bottlenecks within the clinic were identified. The first, between patient check-in and screening, accounted for 33–39% of total in-clinic time depending on the care pathway. The second, before seeing the ophthalmologist, accounted for 35% of total in-clinic time. Overall, over 70% of patient in-clinic time was spent waiting in both care pathways (Fig. ). Through ten patient interviews, ten staff interviews, two staff workshops (including all staff working in the clinic), and an audit of patient complaint data, 100 unique issues causing prolonged patient in-clinic time and clinic inefficiencies were identified. Ten common root causes emerged from root cause analysis, with four contributing to 77% of issues encountered (Fig. ). Scheduling was the most commonly occurring root cause identified in root cause analysis (32% of identified issues). Therefore, further exploration of scheduling data was undertaken. As seen in Fig. a, most patients were scheduled to arrive in the middle of clinics. 
This was due to the clinic schedule design, and ad hoc addition of inpatient and emergency patients into already fully-booked sessions through the clinic’s ‘on-call’ service. Patient influxes at these times were the primary contributor to the bottleneck at the start of the care pathway between check-in and screening.
Four main root causes (scheduling, staffing, patient communication, and clinic processes) were responsible for 77% of issues encountered (Fig. ). Although funding was not available to address staffing, several other targeted, negligible-cost interventions were implemented to address the remaining three main root causes.

To address poor patient scheduling, the clinic schedule was revised to control patients' arrival times. This involved: moving the start time of screening staff and patient appointments to 7:30am so patients could be screened and ready to see the ophthalmologist at 8am; revising appointment slot lengths to better align with the needs of each appointment type; creating dedicated 'on-call' emergency and inpatient appointment placeholders to reduce ad hoc scheduling of these patients; and providing the 'on-call' registrar with a 'live' scheduling app to allow easier identification of available appointment slots for ad hoc bookings. Furthermore, a dedicated postoperative clinic was introduced for 1-week and 4-week postoperative follow-up visits, as these had low-variation care pathways amenable to optimisation through grouping into a dedicated clinic. The impact of these solutions is shown in Fig. b.

To address inefficient clinical processes, further staff feedback was sought on potential solutions and the following three solutions were developed:

Medications

Initially, many frequently used medications (e.g. valacyclovir, timolol, brinzolamide, preservative-free lubricants) were often not readily available in-clinic. This disrupted patient flow, requiring clinicians to call the hospital pharmacy to request the medications and patients to wait for them to be delivered. To address this, imprest medication lists were reviewed and updated to include these medications. Daily checks were implemented to ensure that adequate supplies of medications were available in-clinic.
Triage

Initially, there was no standard order in which to see patients after check-in, with different staff using different approaches. There was no prioritisation system for patients with higher clinical need (e.g. inpatients, unwell persons), and no clear instruction for paper files of newly checked-in patients to be put in appointment order in the clinic's 'patients to be seen' box. As clinicians generally picked up patient files from the top of the box, patients were seen out of chronological order, disrupting patient flow and increasing variability in in-clinic time. To address these issues, defined escalation criteria were created for patients with clinical or other special requirements. Clear instructions were given to place the paper files of newly checked-in patients in appointment order in the 'patients to be seen' box, and clinicians were instructed to see all patients in order of appointment time unless there was an urgent clinical need.

Investigations

Initially, there was no process to clearly document the investigations needed for follow-up patients at their next appointment. This caused inefficiency, as some patients occasionally needed to return to the investigation room after seeing the ophthalmologist for further tests, while others underwent unnecessary non-invasive investigations. To address this, a standard clinic documentation template was introduced for investigations required at the next follow-up visit, with the aim of prompting clinicians to consider and order appropriate investigations in advance (Online supplement: Documentation Template).

Based on the root cause analysis finding that poor patient communication accounted for 16% of issues in the clinic, all written patient communications were reviewed. Referral acknowledgement letters were updated to provide more accurate information regarding wait times for an initial appointment.
Clinic information sheets and posters were developed to inform patients what to expect during their clinic visit. Fact sheets for common ophthalmological conditions and surgical procedures were introduced to improve and standardise patient education, while also potentially reducing the clinician face time needed to provide this education. Consumer representatives reviewed and provided feedback on all revised patient communications.
During the post-implementation period there were 3853 clinic visits over 183 sessions of 240 minutes each (average 21.1 patients per session), a 9% increase in patients per session compared to the baseline period (p < 0.016). Of these visits, 3490 had valid data for analysis. Median patient in-clinic time was 107 minutes, with an interquartile range of 91 minutes (Q1–Q3: 71–162). This was a significant reduction in both duration and variability of patient in-clinic time compared to baseline (both p < 0.001) (Fig. ). Of the visits with invalid data, 11 had no check-out time, 71 had invalid waiting times (patients arriving, being seen, and discharged before their appointment time), and 281 had invalid check-out times (checked out the following day). Comparing the invalid and valid data cohorts, there were no significant differences in age (invalid: 56.5 ± 23.3 years; valid: 58.1 ± 22.8 years, p = 0.190), gender (invalid: female 46.5%; valid: female 43.8%, p = 0.321) or proportion of appointment types (Table ).
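As a quick arithmetic check, the relative improvements follow directly from the medians and interquartile ranges reported in the two audits:

```python
# Values reported in the baseline and post-implementation audits.
baseline_median, post_median = 131, 107  # minutes
baseline_iqr, post_iqr = 133, 91         # minutes

median_reduction = (baseline_median - post_median) / baseline_median
iqr_reduction = (baseline_iqr - post_iqr) / baseline_iqr

print(f"median in-clinic time: -{median_reduction:.0%}, IQR: -{iqr_reduction:.0%}")
# → median in-clinic time: -18%, IQR: -32%
```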
In this study, application of Lean Six Sigma techniques in a publicly-funded tertiary outpatient ophthalmology clinic led to the development of solutions that significantly reduced the duration and variability of patient in-clinic time. Median patient in-clinic time was reduced by 18% and the interquartile range by 32%. These results were achieved while the number of patients seen per session increased by 9%. The solutions used to achieve these results were: clinic schedule amendments to prevent sudden influxes of patients, a dedicated weekly postoperative patient clinic for one-week and four-week postoperative visits, checks to ensure frequently used medications were always available in the clinic, a defined standard order in which to see patients, clear follow-up patient investigation planning documentation templates, and patient information pamphlets for common ophthalmic conditions/surgeries. Of note, these solutions were implemented without additional capital requirements (e.g. purchasing new devices) or ongoing staffing costs.

This study adds to the growing body of literature demonstrating that techniques from business and industry, such as Lean Six Sigma, can be used in healthcare settings to improve system efficiency. Specific to ophthalmology, one North American group who applied Lean Six Sigma techniques to a subspecialist retina clinic (subsequently hiring an extra technician, creating a dedicated intravitreal injection patient pathway, and improving clinic scheduling) reduced mean patient visit times by 18% ( p < 0.05) and variation in visit time by 5% . A second North American group who applied Lean thinking (decentralising their optical coherence tomography machines from a central photography suite into technicians' screening rooms) reduced patient wait times by 74% ( p < 0.0001) and in-clinic time by 36% ( p < 0.0001) . Outside of ophthalmology, Lean Six Sigma has been shown to be effective in a range of healthcare contexts.
The Cleveland Clinic Cardiac Catheterisation Laboratory, for example, applied Lean Six Sigma techniques, subsequently improving patient turnover times and the number of on-time patient and physician arrivals while reducing physician downtime . A further example comes from orthopaedic inpatient care at the Richard L. Roudebush Veterans Affairs Medical Centre in Indianapolis, Indiana, where Lean Six Sigma techniques reduced the length of stay of joint replacement patients by 36%, from 5.3 days to 3.4 days ( p < 0.001) . Finally, on a hospital-wide basis, the University Hospital "Federico II" of Naples used Lean Six Sigma techniques to reduce healthcare-associated infections in inpatients across multiple medical specialties, including general medicine, pulmonology, oncology, nephrology, cardiology, neurology, gastroenterology, endocrinology and rheumatology .

Process improvement methodologies such as Lean Six Sigma present a significant opportunity to deliver better value in healthcare through improved efficiency and reduced 'waste'. More broadly, as demands on healthcare services continue to grow across most medical specialties, a focus on service improvement will be needed to make the best use of the limited resources available. This is particularly true within publicly-funded healthcare systems, where long waiting times for non-emergency services are an increasingly common feature . Service improvement, particularly in organisations utilising Lean Six Sigma methodology, must incorporate feedback from everyone involved, including patients and the multidisciplinary healthcare team. Input from the entire team not only allows for better issue identification and solution generation, but also has the potential to increase team cohesiveness and motivation to actively participate in service improvement .
In this study, broad staff engagement through interviews and workshops allowed a comprehensive diagnosis of the issues facing the Eye Clinic, identified suitable low/negligible-cost solutions, and motivated all staff, from the check-in desk to the ophthalmologists, to contribute to the service improvement effort. Going forward, we believe it has helped facilitate the development of a continuous improvement culture not only in the Eye Clinic but also more broadly in our organisation, with the lessons learnt in this study now being applied to other outpatient clinics at our hospital.

This study has several limitations. Firstly, only qualitative data (i.e. staff interviews) were used to identify inefficient clinic processes. A quantitative investigation defining the exact contribution of these issues to pre- and post-implementation in-clinic times would have better clarified the efficacy of each solution. Secondly, this study did not formally measure the effect of our solutions on patient and staff satisfaction. Staff interviews suggest, however, that staff satisfaction and engagement in improving clinic efficiency have improved. Other studies in outpatient clinics have demonstrated that reduced patient wait times improve patient satisfaction . Thirdly, as the baseline audit was performed retrospectively, many patient visits had invalid data and were excluded from the in-clinic time analysis (1383 of 3624 visits). This was noted during the improvement process and the check-out process was subsequently standardised, resulting in less invalid data in the post-implementation audit (363 of 3853 visits). Overall, most invalid data were due to administration staff oversight in checking out patients at the end of their appointment (these patients were checked out the following day). As such, it is likely the invalid data are missing completely at random, as opposed to being missing due to patient or in-clinic time-related factors.
This is supported by the absence of differences in age or sex between the invalid and valid data cohorts, and by only minimal differences in the proportion of appointment types between the total and valid data cohorts.

There are, however, many strengths to this study. Firstly, it was conducted in a large publicly-funded tertiary referral outpatient ophthalmology service with both inpatient and emergency services, a setting at high risk of facing resource constraints. The fact that the improvements seen in this study were delivered without significant additional capital or ongoing staffing costs increases its applicability to other services with similar characteristics. Secondly, this study had a large sample size, including all patients seen across a range of subspecialties over the audited periods. This further increases the applicability of this study's results to other large, multi-subspecialty ophthalmology services. Thirdly, as many of the solutions implemented are not specific to ophthalmology, they could potentially be applicable to other outpatient specialties. Finally, by auditing patient wait times over two corresponding five-month periods of the year (February to June), the potential for holiday periods and seasonality to confound the results was reduced.
In summary, this study demonstrates that applying Lean Six Sigma to publicly-funded outpatient ophthalmology clinics can reduce duration and variability of patient in-clinic time and increase service capacity, without significant upfront capital expenditure or ongoing resource requirements. It outlines an approach to applying Lean Six Sigma that may be used in other healthcare contexts and some potential solutions that may be applicable to all outpatient clinics, ophthalmology or otherwise. As demands on healthcare resources continue to increase in the future, Lean Six Sigma techniques may play an increasingly important role in improving the delivery of healthcare services.
Structured surgical training in minimally invasive esophagectomy (MIE) increases textbook outcome – a risk-adjusted learning curve

Data collection

Data analysis included demographic details and patient characteristics, including sex, age, BMI, preexisting comorbidities and tumor stage, as well as perioperative data, including operation time, postoperative hospital stay, postoperative complications, and 30- and 90-day mortality, and follow-up data (overall survival, OS). The Clavien-Dindo (CD) classification was applied to grade postoperative complications; complications CD ≥ 3a were defined as major complications. Anastomotic leakage (AL) was defined as an endoscopically and/or radiologically (computed tomography or X-ray after oral intake of contrast) verified defect of the intestinal wall at the anastomotic site. The criteria for determining a Textbook Outcome (TO) were derived from Busweiler et al. , and included (a) clear resection margins (R0), (b) examination of at least 21 lymph nodes, (c) absence of postoperative complications categorized as Clavien-Dindo ≥ 3b, (d) no surgical re-interventions, (e) no unexpected admissions to ICU/IMC, (f) hospital stay under 21 days, (g) no hospital readmissions within 30 days post-discharge, and (h) absence of mortality within 30 days after the procedure. All data were prospectively collected in a database; data analysis was performed retrospectively. Data collection was approved by the local ethics committee (EA2/212/23).

Inclusion and exclusion criteria

A total of 321 patients receiving MIE for EC between 2015 and 2022 in our center were included in this retrospective analysis, of whom 191 met the inclusion criteria. Only patients with carcinomas of the esophagus or esophagogastric junction AEG I and II (cT1b-4a N0-3 M0) undergoing elective MIE in curative intention were included in the analysis.
Patients undergoing MIE with palliative intention were excluded. Cases in which a cervical anastomosis was performed, or in which esophagectomy was performed with a minimally invasive approach other than the one described (e.g., robotic-assisted or hybrid laparoscopic esophagectomy), were also excluded from the analysis (Fig. ). All patients underwent standardized staging diagnostics, including endoscopy and CT scan, and were discussed in our institutional multidisciplinary tumor board prior to surgery. If recommended by the tumor board and in accordance with the current national guideline, patients received neoadjuvant chemo- and/or radiotherapy prior to surgery .

Surgical procedure and perioperative management

MIE was performed as previously described . In short, laparoscopic gastric mobilization and systematic lymphadenectomy (LAD) were followed by transthoracic minimally invasive esophagectomy with 2-field LAD in the sense of an Ivor Lewis procedure. To restore enteral continuity, gastric pull-up with either a circular stapled end-to-side anastomosis or a linear stapled side-to-side anastomosis was performed. Circular stapled anastomoses were secured by handsewn V-Loc sutures. The integrity of the anastomosis was confirmed intraoperatively via endoscopy and a nasogastric tube was placed under view. A chest tube was routinely placed. After surgery, all patients were admitted to our specialized intensive care unit for at least 2 days, with immediate start of oral fluid intake. Nasogastric tubes were removed on the second day. Subsequently, enteral nutrition was started and adapted according to patient tolerance.

Surgeons' prior experience

Our surgical department comprises two campuses and is accredited as a tertiary referral center for gastroesophageal surgery by the German Society for General and Visceral Surgery (DGAV). MIE was implemented in 2014 by our surgical team.
Since the implementation, approximately 40–50 esophageal resections have been performed each year, nowadays including 10–15 MIEs, 5–10 hybrid MIEs, and 25–30 robotic-assisted resections. MIE and, more recently, RAMIE represent the standard curative surgical treatment for patients with EC in our center. Both surgeons, trainer as well as trainee, were experienced specialists in general and visceral surgery with prior expertise in upper GI surgery (Table ).

Teaching concept and patient selection during the learning period

The training program aims to achieve surgical autonomy in MIE for advanced upper GI surgeons with prior experience in gastric and bariatric surgery. Surgical autonomy was considered achieved when the trainee was able to independently perform the complete surgical procedure. The criteria for autonomy required that the trainee's outcomes for textbook outcomes (TO), complication rates (including anastomotic leak (AL) and pneumonia), and perioperative morbidity consistently matched those of the trainer as well as international benchmarks. The trainee was considered autonomous when able to uphold this standard without any concessions in terms of operative time. This milestone was a key indicator of the trainee's readiness to transition from supervised operations to performing surgeries independently, ensuring they met both national and international standards of surgical care. The study covers the training program of one trainer surgeon and one trainee surgeon. The trainee underwent a standardized program at our institution, outlined in Fig. . All procedures included were performed by both surgeons (trainer and trainee) together, and all surgeries were additionally assisted by an upper GI fellow. This pathway includes prior training with a laparoscopic trainer/simulator, followed by bariatric surgery, minimally invasive gastrectomy (MIC), and then minimally invasive esophagectomy (MIE). This sequence does not vary for trainees.
At the beginning of the MIE training period, the trainee performed individual sub-steps of the procedure. With growing experience, selected cases were performed by the trainee under supervision and assistance from the trainer (Fig. ). Patient selection was made at the trainer's discretion based on preoperative patient characteristics affecting technical complexity and perioperative outcome. These criteria included normal BMI, preoperative tumor size < T2, and a low ASA score, in particular the absence of preexisting cardiac disease and the absence of diabetes. Because of this selection bias, risk adjustment was performed in the LC analysis to exclude any bias.

Defining the learning curve (LC) for minimally invasive esophagectomy for esophageal cancer

The training program is designed to enable advanced upper GI surgeons to achieve surgical autonomy in minimally invasive esophagectomy (MIE) while maintaining a consistent standard of care and stable complication rates. The primary objective of the program is to reduce the morbidity and mortality generally associated with the learning curve. As such, essential outcome metrics like textbook outcomes, anastomotic leak (AL), and morbidity should not change drastically if the program is efficient; operative time functions as a surrogate parameter in this case. To calculate the LC of the two observed surgeons, cumulative sum (CUSUM) analysis was applied. CUSUM is a statistical tool used to monitor small shifts in the mean of a process, effectively identifying deviations from expected performance standards. CUSUM tracks a surgeon's performance over time through a cumulative sum of deviations from a predefined standard. This approach has been applied especially during the training of a new surgeon or the implementation of a new surgical technique. Herein, we used RA-CUSUM analysis. Risk adjustment defines a surgeon's learning curve according to case mix and the likelihood of adverse events.
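The RA-CUSUM idea described above can be illustrated with a simplified observed-minus-expected formulation for a binary adverse event such as anastomotic leakage; the exact variant and risk model used in the study may differ.

```python
def ra_cusum(outcomes, predicted_risks):
    """Risk-adjusted CUSUM for a binary adverse event.

    outcomes: 1 if the event occurred in that case, else 0, in case order.
    predicted_risks: per-case event probability from a risk model
    (e.g. logistic regression on age, BMI, ASA score, tumor stage).
    Each case adds (observed - expected), so the curve rises while the
    surgeon underperforms the risk-adjusted expectation and falls once
    performance exceeds it.
    """
    total, curve = 0.0, []
    for observed, expected in zip(outcomes, predicted_risks):
        total += observed - expected
        curve.append(total)
    return curve

def learning_curve_end(curve):
    """Index of the curve's maximum, commonly read as the turning point
    after which performance exceeds the risk-adjusted expectation."""
    return max(range(len(curve)), key=curve.__getitem__)
```

Because each case's contribution is offset by its predicted risk rather than a fixed target, a run of complications in deliberately selected low-risk cases weighs more heavily than the same run in high-risk cases, which is exactly why risk adjustment was needed given the trainer's case selection.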
Due to the application of risk adjustment, RA-CUSUM analysis is less susceptible to outliers or selection bias, which helps to identify underlying trends. In our analysis, performance measures included operation time (OT), length of stay (LOS), nights on ICU, and lymph node yield (LN yield). Risk-adjusted RA-CUSUM analysis was also performed to assess the LC in terms of anastomotic leakage (AL), major complications (MC, Clavien-Dindo ≥ 3a), and achieved textbook outcomes (TO). Risk adjustment was based on age, sex, BMI, preoperative tumor size < T2, ASA score, UICC score, and the absence of preexisting heart disease or diabetes. Additional demographic information was not included in either analysis, as there were no significant differences between the groups and no significance in terms of the target parameters (MC, TO, and AL). Linear and logistic regression models based on these parameters were established, and log-transformed data were included in the RA-CUSUM analysis.

Further statistical analysis

To compare datasets regarding patient characteristics and outcomes between trainee and trainer, statistical analyses were conducted using Student's t test for continuous variables confirmed to follow a normal distribution. Normality of continuous variables was assessed using the Shapiro–Wilk test. For non-normally distributed variables, non-parametric tests such as the Mann–Whitney U test were applied. Categorical data were compared using the χ2 test. Multivariate analysis was performed using a nominal logistic regression model. Statistical significance was defined as a p value below 0.05. Survival analysis for overall survival (OS) and disease-free survival (DFS) was performed using the Cox proportional hazards model. All analyses were performed using JMP Pro®, Version 16 (SAS Institute Inc., Cary, NC, 1989–2021).
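For the Mann–Whitney U test named above, the statistic itself is simple enough to compute without a statistics package; this sketch uses tie-averaged ranks (a p-value would still require a normal approximation or a statistics library).

```python
def ranks(values):
    """Average ranks (1-based), with tied values sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of values tied with values[order[i]].
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def mann_whitney_u(x, y):
    """Smaller of the two U statistics for samples x and y."""
    r = ranks(list(x) + list(y))
    r1 = sum(r[: len(x)])                    # rank sum of the first sample
    u1 = r1 - len(x) * (len(x) + 1) / 2
    return min(u1, len(x) * len(y) - u1)
```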
Data analysis included demographic details and patient characteristics, including sex, age, BMI, preexisting comorbidities, tumor stage, as well as perioperative data, including operation time, postoperative hospital stay, post-operative complications, 30- and 90-day mortality, and follow-up data (overall survival (OS)). Clavien-Dindo (CD) classification was applied to grade post-operative complications. Complications CD ≥ 3a were defined as major complications. Anastomotic leakage (AL) was defined as endoscopically and/or radiological (computed tomography or X-ray after oral intake of contrast) verified defect of the intestinal wall at the anastomotic site. The criteria for determining a Textbook Outcome (TO) were derived from Busweiler et al. , which included (a) clear resection margins (R0), (b) examination of at least 21 lymph nodes, (c) absence of postoperative complications categorized as Clavien-Dindo ≥ 3b, (d) no surgical re-interventions, (e) no unexpected admissions to ICU/IMC, (f) hospital stay under 21 days, (g) no hospital readmissions within 30 days post-discharge, and (h) absence of mortality within 30 days after the procedure. All data were prospectively collected in a database. Data analysis was performed retrospectively. Data collection was approved by the local ethics committee (EA2/212/23).
A total of 321 patients receiving MIE for EC between 2015 and 2022 in our center were included in this retrospective Analysis, while 191 patients met the inclusion criteria. Only patients with carcinomas of the esophagus or esophagogastric junction AEG I and II (cT1b-4a N0-3 M0), undergoing elective MIE in curative intention were included in the analysis. Patients undergoing MIE in palliative intention were excluded. Cases in which cervical anastomosis was performed or in which esophagectomy was performed with another minimally invasive approach than the described one (e.g., robotic-assisted, hybrid laparoscopic esophagectomy) were also excluded from the analysis (Fig. .). All patients underwent standardized staging diagnostics, including endoscopy and CT scan, and were discussed in our institutional multidisciplinary tumor board prior to surgery. If recommended by the tumor board and according to the current national guideline, patients received neoadjuvant chemo- and/or radiotherapy prior to surgery .
MIE was performed as previously described . In short, laparoscopic gastric mobilization and systematic lymphadenectomy (LAD) was followed by transthoracic minimally invasive esophagectomy with 2-field LAD in the sense of an Ivor Lewis procedure. To restore enteral continuity, gastric pull-up with either circular stapled end-to-side anastomosis or linear stapled side-to-side anastomosis was performed. Circular stapled anastomosis was secured by handsewn V-Loc sutures. The integrity of the anastomosis was confirmed intraoperatively via endoscopy and a nasogastric tube was placed under view. A chest tube was routinely placed. After surgery, all patients were admitted to our specialized intensive care unit for at least 2 days with immediate start of oral fluid intake. Nasogastric tubes were removed on the second day. Subsequently, enteral nutrition was started and adapted according to patient tolerance.
Our surgical department comprises two campuses and is accredited as a tertiary referral center for gastroesophageal surgery by the German society for general and visceral surgery (DGAV). MIE was implemented in 2014 by our surgical team. Since the implementation, approximately 40–50 esophageal resections are performed each year, nowadays including 10–15 MIEs, 5–10 hybrid MIEs, and 25–30 robotic-assisted resections. MIE and more recently RAMIE represent the standard curative surgical treatment for patients with EC in our center. Both surgeons, trainer as well as trainee, were experienced specialists for general and visceral surgery with prior expertise in upper GI surgery (Table .).
The training program aims to achieve surgical autonomy in MIE for advanced upper GI surgeons with prior experience in gastric and bariatric surgery. Surgical autonomy was considered achieved when the trainee was able to independently perform complete surgical procedure. The criteria for autonomy required that the trainee's outcomes for textbook outcomes (TO), complication rates (including anastomotic leak (AL) and pneumonia), and perioperative morbidity consistently matched those of the trainer as well as international benchmarks. The trainee was considered autonomous when they were able to uphold this standard without any concessions in terms of operative time. This milestone was a key indicator of the trainee’s readiness to transition from supervised operations to performing surgeries independently, ensuring they met both national and international standards of surgical care. The study covers the training program of one trainer surgeon and one trainee surgeon. The trainees underwent a standardized program at our institution outlined in Fig. . All procedures included were performed by both surgeons (trainer and trainee) together. All surgeries were additionally assisted by an upper GI fellow. This pathway includes prior training with a laparoscopic trainer/simulator, followed by bariatric surgery, minimally invasive gastrectomy (MIC), and then minimally invasive esophagectomy (MIE). This sequence does not vary for trainees. In the beginning of the MIE training period, the trainee performed individual sub-steps of the procedure. With growing experience, selected cases were performed by the trainee under supervision and assistance from the trainer (Fig. ). Patient selection was made at the trainer’s discretion based on preoperative patient characteristics affecting technical complexity and perioperative outcome. 
Those criteria included normal BMI, preoperative tumor size < T2, low ASA score in particular the absence of preexisting cardiac disease and absence of diabetes. Due to this selection bias, risk adjustment was performed in LC analysis for exclusion of any bias.
The training program is designed to enable advanced upper GI surgeons to achieve surgical autonomy in minimally invasive esophagectomy (MIE), while maintaining a consistent standard of care and stable rates of complications. The primary objective of the program is to reduce morbidity and mortality which are generally associated with the learning curve. As such, essential outcome metrics like textbook outcomes, anastomotic leak (AL), and morbidity should not change drastically if the program is efficient, operative time functions as a surrogate parameter in this case. To calculate the LC of the two observed surgeons, cumulative sum (CUSUM) was applied. CUSUM is a statistical tool used to monitor small shifts in the mean of a process, effectively identifying deviations from expected performance standards. CUSUM uses a cumulative sum of deviations from a predefined standard to track a surgeon’s performance over time. This approach has been applied especially during training of a new surgeon or implementation of a new surgical technique. Herein, we used RA-CUSUM analysis. Risk adjustment is to define a surgeon’s learning curve according to case mix and the likelihood of adverse events. Due to the application of risk adjustment, RA-CUSUM analysis is less susceptible to outliers or selection bias which helps to identify underlying trends. In our analysis, performance measures included operation times (OT), length of stay (LOS), Nights on ICU, and lymph node yield (LN Yield). Risk-adjusted RA-CUSUM analysis was also performed to assess the LC in terms of anastomotic leakage (AL), major complications (MC, Clavien Dindo ≥ 3a), and achieved textbook outcomes (TO). Risk adjustment was performed based on age, sex, BMI, preoperative tumor size < T2, ASA score and UICC Score, and absence of preexisting heart disease or diabetes. 
Additional demographic information was not included in either analysis, as there were no significant differences between the groups and no significant associations with the target parameters (MC, TO, and AL). Linear and logistic regression models based on these parameters were established, and log-transformed data were included in the RA-CUSUM analysis.
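As a rough sketch of how a logistic model can feed a risk-adjusted CUSUM, the snippet below accumulates observed-minus-expected event counts per case. All coefficients and case data are hypothetical illustrations; the paper's actual models and covariates may differ.

```python
import math

# Hedged sketch: logistic risk model + observed-minus-expected RA-CUSUM.
# All coefficients and cases below are hypothetical illustrations.
def predicted_risk(intercept, coefs, covariates):
    """Expected event probability for one case under a logistic model."""
    z = intercept + sum(b * x for b, x in zip(coefs, covariates))
    return 1.0 / (1.0 + math.exp(-z))

def ra_cusum(observed, expected):
    """Cumulative sum of (observed event - model-expected risk) per case.
    Upward drift = more events than the case mix predicts; downward = fewer."""
    total, curve = 0.0, []
    for o, e in zip(observed, expected):
        total += o - e
        curve.append(total)
    return curve

# Five hypothetical cases: model-based risks and observed binary outcomes.
expected = [0.30, 0.25, 0.20, 0.35, 0.15]
observed = [1, 0, 0, 0, 0]
print(ra_cusum(observed, expected))
```

Because each increment is weighted by the model-predicted risk, a run of complication-free high-risk cases pulls the curve down faster than the same run of low-risk cases, which is what makes the method robust to case-mix selection.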
To compare patient characteristics and outcomes between trainee and trainer, statistical analyses were conducted using Student's t test for continuous variables confirmed to follow a normal distribution. Normality of continuous variables was assessed using the Shapiro–Wilk test. For non-normally distributed variables, non-parametric tests such as the Mann–Whitney U test were applied. Categorical data were compared using the χ2 test. Multivariate analysis was performed using a nominal logistic regression model. Statistical significance was defined as a p value below 0.05. Survival analysis for overall survival (OS) and disease-free survival (DFS) was performed using the Cox proportional hazards model. All analyses were performed with JMP Pro®, version 16 (SAS Institute Inc., Cary, NC, 1989–2021).
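For illustration, the Pearson chi-squared statistic used for the categorical comparisons can be computed by hand for a 2×2 table. The counts below only approximate the textbook-outcome proportions reported in the Results (Trainer 27/100 vs. Trainee roughly 37/91) and are meant as a worked example, not a re-analysis.

```python
def chi2_2x2(table):
    """Pearson chi-squared statistic for a 2x2 contingency table
    (rows = groups, columns = outcome yes/no)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            exp = row_totals[i] * col_totals[j] / n
            stat += (obs - exp) ** 2 / exp
    return stat

# Approximate textbook-outcome counts: [achieved, not achieved] per group.
stat = chi2_2x2([[27, 73], [37, 54]])
print(round(stat, 2))  # about 3.99, above the 3.84 critical value (df = 1, p < 0.05)
```

The statistic just clears the df = 1 critical value, consistent with the borderline p = 0.046 reported for textbook outcomes.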
Patient characteristics and histopathology

The comprehensive overview of preoperative patient characteristics and histopathological findings is presented in Table . An examination of key demographic variables across the groups, including age, gender distribution, and preoperative comorbidities, revealed no statistically significant differences. Similarly, our analysis of histopathological data, encompassing parameters such as tumor stage, nodal involvement, and tumor differentiation, indicated a lack of substantial variation among the groups apart from advanced tumor size (pT4; Trainer vs. Trainee, 0.00% vs. 6.59% ( p = 0.027)). This was accounted for in the subsequent LC analysis by risk adjustment. These cases were performed at the end of the learning curve and may reflect the increased implementation of the laparoscopic approach in locally progressive disease.

Postoperative outcome

Perioperative patient outcomes in both groups are summarized in Table . Our investigation reveals a comparable mean duration of resection (Trainer vs. Trainee, 404.40 min vs. 383.07 min ( p = 0.085)). Similarly, there was no significant difference in the median count of lymph nodes extracted (Trainer vs. Trainee, 29.5 [25–37.75] vs. 32 [26–39] ( p = 0.0789)). The observed trends toward shorter operation times and higher LN yields may reflect an overall continuous improvement of the general technique during the observational period of 7 years. The occurrence of positive resection margins depicted similar trends (Trainer vs. Trainee, 6.00% vs. 3.30% ( p = 0.281)), as did the need for intraoperative red blood cell transfusions (Trainer vs. Trainee, 11.00% vs. 6.59% ( p = 0.281)). Regarding complications, the incidence of major complications showed no significant difference under the care of the trainee (Trainer vs. Trainee, 25.00% vs. 14.29% ( p = 0.062)). There was no difference in terms of pneumonia (Trainer vs. Trainee, 27.00% vs.
19.78% ( p = 0.239)), cardiovascular complications (Trainer vs. Trainee, 6.00% vs. 3.30% ( p = 0.373)), or anastomotic leak (Trainer vs. Trainee, 10.00% vs. 7.69% ( p = 0.575)). Despite similar complication rates, a significantly shorter ICU stay was seen in patients operated on by the trainee (Trainer vs. Trainee, 3 [2.25–6.75] days vs. 3 [ – ] days ( p = 0.0359)), as well as a shorter duration of hospital stay (Trainer vs. Trainee, 16.5 [13–31.5] days vs. 16 [12–22] days ( p = 0.0361)). These differences again are accounted for by the continuous improvement of postoperative patient management supporting enhanced recovery. No difference was seen in 30-day mortality rates (Trainer vs. Trainee, 1.15% vs. 0.00% ( p = 0.256)). Finally, the occurrence of textbook outcomes was significantly higher in the trainee cohort (Trainer vs. Trainee, 27.00% vs. 40.66% ( p = 0.046)).

Learning curves for trainee and trainer

A comprehensive analysis of the RA-CUSUM results reveals intriguing patterns in operation duration for both the trainer and trainee. Initially, the trainee exhibited longer operation times than the trainer, as shown by the higher consecutive values on the CUSUM chart (Fig. ). However, the mean operating time did not differ between the trainee and trainer across the entire cohorts (Trainer 404.40 (± 75.80) vs. Trainee 383.07 (± 84.88), p = 0.085). The trainer, despite maintaining a steady operation duration at the outset, displayed enhanced efficiency in terms of OT after an additional 80 cases (Fig. ). A comparative analysis between the initial and final outcomes (initial 10 vs. final 10 cases) (Fig. ) showed that the operative times of both the trainee and trainer decreased significantly during the training period. The OT of the trainer decreased from 436.4 (± 34.77) min to 331.4 (± 72.00) min ( p < 0.001), while the OT of the trainee decreased from 504.4 (± 45.20) min to 369.3 (± 78.20) min ( p < 0.001).
The RA-CUSUM analysis for major complications (Fig. ) revealed a distinctive LC with respect to MC rates. Throughout the learning period, the trainee consistently achieved MC rates comparable to those of the trainer. Following risk adjustment, we observed a noteworthy improvement in the trainee's MC incidence after the completion of 55 consecutive cases. This may reflect the influence of patient selection at the beginning of the learning curve. However, a clear LC was evident in technical operative skills after 45 cases (operative time, Fig. ). Interestingly, no LC was seen in terms of AL, with consistent rates in both the trainee and trainer cohorts (Fig. ). Both the trainer and trainee maintained comparable and stable AL rates of 10.0% and 7.7%, respectively ( p = 0.575). Of note, the anastomotic technique was adjusted during the teaching period from a 25 mm to a 29 mm circular anastomosis, thereby potentially influencing the AL rates in the latter period positively. Examining the composite parameter TO (Fig. ), our investigation also revealed a learning curve after 83 cases. The rate of TO achievement by the trainee never dropped below that of the trainer. Finally, the occurrence of textbook outcomes was significantly higher in the trainee cohort overall (Trainer vs. Trainee, 27.00% vs. 40.66% ( p = 0.046)). It is apparent that this parameter encompasses multifaceted aspects of surgical management and is considerably influenced by factors beyond operative performance alone.

Survival

To ensure an adequate follow-up time of at least 2 years, only patients operated on between 2015 and 2020 were included in the survival analysis. Survival analysis comprised a total of 97 patients, 66 in the Trainer group and 31 in the Trainee group. The study encompassed a mean follow-up duration of 1094 (± 580) days in the trainer group and 641 (± 484) days in the trainee group.
The 1-year overall survival rates were 97.84% for the Trainer group and 98.80% for the Trainee group, while the 3-year rates were 82.88% and 82.71%, respectively ( p = 0.436). The 1-year disease-free survival rates were 84.42% and 90.81%, while the 3-year rates were 66.03% and 75.67%, respectively ( p = 0.305). Notably, no statistically significant distinctions emerged in survival rates, highlighting comparable outcomes between the Trainer and Trainee groups (Fig. ). This indicates that the learning curve did not compromise the oncological or surgical safety of the operation.
Minimally invasive esophagectomy (MIE) presents significant learning challenges but offers key benefits for patients with esophageal and gastroesophageal junction (GEJ) cancer. Despite the learning curve (LC) inherent even for seasoned surgeons, prioritizing patient safety and optimal outcomes remains paramount. Structured training, selective patient assignment, and ongoing quality assurance are essential to enhance teaching effectiveness and keep LCs brief. Our institutional review of 191 MIE procedures (2015–2022), including a trainee working with an experienced surgeon, showed no notable difference in postoperative complications, overall survival, or TO throughout the LC, demonstrating the viability of our educational strategy. Operating time is a common performance metric used in LC evaluation. In the literature, operative durations ranging from 237 to 443 min have been described for MIE . The learning curve reported in previous studies ranges from 80 to 119 cases [ – ]. In our study, the trainee completed the learning curve within 45 cases with a mean operating time of 383.07 (± 84.88) min. While one previous study reports similar values , this is generally shorter than the LC reported in the literature [ – ]. Of note, during the observed period, 51 patients with EC underwent hybrid minimally invasive esophagectomy (open technique for the abdominal part) and 54 underwent robotic MIE (RAMIE) by the same surgical team, including trainer and trainee, adding further experience . These cases were not accounted for in our study and may contribute to the comparably shorter learning curve observed in terms of operation time. Interestingly, a trend toward further improvement of the operating time could also be seen for the trainer after 63 cases despite proficiency. The continued refinement in operating time by the trainer possibly underscores ongoing skill enhancement driven by increased experience and familiarity with the procedure.
Research by Valsangkar et al. suggests that neither short nor intermediate surgery times consistently predict postoperative results following Ivor Lewis esophagectomy . This finding casts doubt on the reliability of operative time as a solitary metric for gauging the learning curve or shifts in performance levels. Interestingly, the learning curve for operative time (most specifically representing the technical proficiency of the surgeon) is considerably shorter than the learning curve for the other perioperative parameters observed in our study. The longer LC observed for MC or TO compared with operative time underscores the intricate interplay between mastering the required technical skills and the multifaceted dimensions of surgical proficiency. This prolonged trajectory signifies the complex nature of acquiring surgical judgment and the ability to provide the comprehensive perioperative care involved in complication management. Using the trainer's surgical results as the outcome target, we observed that the learning curve was achieved after 55 cases in terms of major complications after risk adjustment. The results of our study partly differ from the experiences of other surgical centers that analyzed LCs for MIE during its implementation. In contrast with our observations, two multicenter studies by van Workum et al. in 2019 and Claassen et al. in 2022 showed increased morbidity during MIE LCs . These studies report achievement of the learning curve in terms of major complications after 34 and 119 cases, respectively, indicating a longer learning curve than that seen in our cohort [ , , ]. An explanation might be that the mentioned studies analyzed the LC during the initial implementation of MIE in different surgical centers. In contrast, our study analyzed the LC of a learning surgeon in a center in which MIE was already an established procedure. Furthermore, Claassen et al.
included patients with tumors localized in the middle third of the esophagus, which could also contribute to the discrepant LC results. In contrast to other authors, we excluded patients with cervical anastomosis. Finally, the additional experience of the surgeons in hybrid MIE and RAMIE may have augmented the learning experience . In our study, we did not observe a significant LC for anastomotic leakage. Van Workum et al. and Claassen et al. described 119 and 131 performed cases, respectively, to achieve AL rates of 8% and 14% . Considering the number of cases performed during the period observed in our study (100 cases by the trainer, 91 cases by the trainee), we cannot confirm these values. However, we observed stable and comparable AL rates for both trainer and trainee. Using the values of the trainer as a benchmark, the AL rates of the trainee remained in the target range throughout the study. The style of anastomosis was adapted during the learning period from a 25 mm to a 29 mm circular anastomosis. A 29 mm esophagogastrostomy has been linked to reduced rates of AL . While we saw a trend toward reduced AL rates after 60 cases, this trend was not sufficient to produce a shift in means detectable by RA-CUSUM analysis. Despite the evolving proficiency demonstrated in other domains, TO also remained resilient to the characteristic learning curve trajectory observed in this study. In contrast, Claassen et al. report a learning curve in terms of TO of 46 cases . The rate of TO consistently achieved by the trainee during the whole learning period was in line with that described in the literature. A TO rate of 30.7% was previously described in a large international study by the Oesophago-Gastric Anastomotic Audit (OGAA) Collaborative [ – ]. Furthermore, the rate of TO achieved by the trainee did not fall below that of the trainer at any point throughout the learning curve.
A further improvement in TO incidence was observed after the performance of 83 consecutive cases by the trainee. TO, as a composite measure of surgical outcomes, defines the best possible outcome of a surgical procedure . Our results suggest that the intricacies of patient care, postoperative recovery, and potential medical interventions contribute significantly to this parameter . This underscores the complex, multifactorial nature of TO as a performance metric and highlights the non-operative skill acquisition required by surgeons for optimal complication management. Our study's methodology involved a highly selective patient screening process, in which patients with a tumor size > T2, an ASA score > 2, prior cardiac disease or diabetes, or a BMI > 30 were identified as higher risk and were therefore more likely to have their resections performed by the experienced surgeon only. While this approach minimized the risks associated with surgical training, it introduced an unavoidable selection bias into our results. This bias is particularly evident in the significantly higher number of postoperative ICU days and longer hospital stays (LOS) for patients operated on by the trainer, likely because more complex cases were indeed handled by more experienced hands. Additionally, given the long observational period, these differences also reflect the gradual adoption of ERAS principles over the course of the study. During the earlier phases of our study period, there was a more cautious postoperative approach, particularly in complex cases handled by trainers, leading to longer ICU stays. Over time, our institution gradually integrated Enhanced Recovery After Surgery (ERAS) protocols, which include immediate mobilization, intensive conservative respiratory management, and enhanced physiotherapy. Moreover, the inclusion of only one trainee in our analysis significantly limits the generalizability of our findings.
The insights gained are inherently limited by this single-trainee framework, restricting the applicability of our results to broader educational settings or diverse surgical teams. This aspect of our study design should prompt caution when extrapolating our findings to other institutions or training programs. In addition, esophagectomies involving other related procedures were systematically excluded from our study, potentially introducing additional bias. This exclusion might skew the complexity and outcomes associated with standard esophagectomies and could mask the true challenges encountered in surgeries that involve more extensive operative scopes. Risk adjustment was implemented in our RA-CUSUM analysis to mitigate some of the biases; however, this was not extended to the comparison of event distributions across the entire cohort (Table and Table ). The study’s case volume, while aligned with the annual benchmark for German surgical centers , remains limited, and therefore, the absence of significant differences in major complications rates should not be over-interpreted. The small sample size could underpower statistical tests, potentially overlooking real differences or misinterpreting the surgical outcomes. These limitations underscore the need for further studies with larger, more diverse cohorts and a variety of training setups to validate and possibly extend our findings.
Based on the results of the analysis of the data of 191 patients undergoing MIE in our center between 2015 and 2022, we conclude that MIE can be taught to and learned by experienced upper gastrointestinal surgeons without compromising oncological and surgical outcomes in patients with resectable EC/CGJ. Structured surgical training facilitates consistent perioperative outcomes throughout the learning period. This study emphasizes the importance of structured training programs and continued risk assessment for enhanced patient care.
The effects of repeated inhaler device handling education in COPD patients: a prospective cohort study | 13383aca-4607-421b-81a8-d9e6551c14d9 | 7665176 | Patient Education as Topic[mh] | Chronic obstructive pulmonary disease (COPD) exhibits many different phenotypes, and the prevalence ranged from 12.9 to 17.2% in the Korean National Health and Nutritional Examination Survey II (KNHANES II) , . Correct inhaler use is important: incorrect use is associated with an increased risk of acute exacerbation, hospital admission, emergency room visits, and a need for antimicrobials and oral steroids – . However, in the real world, inhaler mishandling and poor adherence are very common, despite the fact that most COPD patients receive education on inhaler use , , . Many studies have shown that education reduces inhaler mishandling, significantly improving inhaler technique – . Quality of life refers to satisfaction or happiness in aspects of life when an individual is affected by their health , . Quality of life of COPD patients was lower than that of the general population. High severity of COPD, depression, and osteoporosis were associated with lower quality of life in Korean COPD patients . Patient satisfaction with inhaler device is associated with patient adherence and clinical outcomes. In a large, multinational, cross-sectional, real-world survey with COPD patients, significant association was reported between inhaler satisfaction and treatment adherence. Furthermore, there was a direct association between inhaler satisfaction and fewer COPD exacerbations . Few studies have examined the association between inhaler education and quality of life , – ; no study has explored the relationship between inhaler education and inhaler satisfaction. Thus, we evaluated inhaler handling and adherence, and changes in quality of life and inhaler satisfaction, after repeated education for COPD patients.
Study design and subjects

This prospective study was conducted in the pulmonology outpatient department of the Regional Center for Respiratory Diseases, Yeungnam University Hospital (a tertiary university hospital in Daegu, South Korea) from January 2018 to May 2019. Patients aged over 40 years and diagnosed with COPD were initially enrolled, and all those who had used inhalers of any kind for more than 1 month were recruited to the study. The intervention included three visits over 6 months; follow-up visits were performed every 3 months. In total, 72 patients were excluded for the following reasons: inhaler device changed during the study period (n = 30); lost to follow-up (n = 40); and did not complete the three visits (n = 2). Only COPD patients who completed all three visits and maintained the same inhaler device during the study period were analyzed: 261 patients using 308 inhalers were included (Fig. ). The inhalers included the Turbuhaler, Breezhaler, Ellipta, Diskus, Genuair, Respimat, and pressurized metered-dose inhaler (pMDI) models. We excluded patients using a pMDI with a spacer or other inhalers, those with advanced cancer, and pregnant females.

Patient visits

During the study, patients who agreed to participate were enrolled from among all COPD patients who visited our respiratory outpatient clinic. The intervention included three visits over 6 months; follow-up visits were performed every 3 months. All patients had undergone pulmonary function tests within the 3 months prior to enrolment. At visit 1 (baseline), written informed consent was obtained from all patients. A general questionnaire exploring age, sex, body mass index, smoking status, COPD duration, previous inhaler education, previous COPD education, and educational level was administered.
The modified Medical Research Council test (mMRC) , the COPD Assessment Test (CAT) , the Mini-Mental State Examination (MMSE) , the EuroQol-5D (EQ-5D) instrument , and the Patient Health Questionnaire (PHQ-9) were administered at the first visit to assess quality of life, together with the Feeling of Satisfaction with Inhaler questionnaire (FSI-10) . All questionnaires were available free online. An advanced practice nurse assessed inhaler technique and adherence and delivered face-to-face training using the “teach-back” technique, in which the nurse says: “Can you show me what I showed you and explain it to me?” “Teach-back” is a technique that requires patients to explain or demonstrate their skills back after training . Repetitive training using the “teach-back” technique was conducted at visit 1 until the patient fully understood the inhaler device and could fully explain its operation. At visits 2 and 3, the nurse re-assessed inhaler technique and adherence and delivered face-to-face training using the “teach-back” technique if any error was apparent. At visit 3, we re-administered the mMRC, CAT, EQ-5D, and PHQ-9 to assess changes in quality of life, and the FSI-10.

Data collection and definitions

An advanced practice nurse specializing in inhaler education performed all of the interviews and training sessions . The nurse was educated by our COPD specialists and had trained COPD patients in inhaler techniques for 3 years. Critical errors were defined as errors seriously compromising drug delivery to the lung. We created a standardized checklist of critical steps in inhaler use by reference to the review literature . The critical errors are listed in Table . Adherence was self-reported and graded as good, partial, or poor, according to whether the entire daily dose was taken, the daily dose (frequency or amount) taken was more or less than required, or the medication was taken only as needed or not at all, respectively .
The FSI-10 (10 questions) is a validated self-administered questionnaire evaluating patient satisfaction with their inhaler , . The answer options range from “hardly at all” (score of 1 on a 5-point Likert scale) to “very” (score of 5); the total score thus ranges from 10 to 50, with higher scores indicating greater satisfaction. Inhaler convenience, maintenance, portability, and “feel” are all assessed by the FSI-10.

Statistical analysis

Continuous variables are expressed as means ± standard deviations (SDs) and were compared using Student's t -test or the Mann–Whitney U test. Categorical variables were compared using the chi-squared test or Fisher's exact test. In all analyses, a two-tailed p-value < 0.05 was considered to indicate statistical significance. All statistical analyses were performed using SPSS software (ver. 24.0; SPSS Inc., Chicago, IL, USA). A prospective power calculation indicated that an overall sample size of 220 was required to evaluate the efficacy of education (95% power, α = 0.05, effect size = 0.3). To allow for dropout, we sought to enroll 260 patients .

Ethics approval and consent to participate

This study was conducted in accordance with all relevant tenets of the Declaration of Helsinki. The protocol was reviewed and approved by the institutional review board of our hospital (Yeungnam University Hospital Institutional Review Board 2017-09-012-001). Written informed consent was obtained from all patients.
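Scoring the FSI-10 as described is a simple sum of ten Likert items; the sketch below (with hypothetical responses) makes the 10–50 range explicit.

```python
# FSI-10 scoring sketch: ten items, each 1 ("hardly at all") to 5 ("very").
# The response vector below is hypothetical.
def fsi10_score(responses):
    """Total FSI-10 satisfaction score; valid totals span 10 (all 1s) to 50 (all 5s)."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("FSI-10 requires ten items, each scored 1-5")
    return sum(responses)

print(fsi10_score([4, 4, 5, 3, 4, 4, 5, 4, 3, 4]))  # 40
```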
This prospective study was conducted in the pulmonology outpatient department of the Regional Center for Respiratory Diseases, Yeungnam University Hospital (a tertiary university hospital in Daegu, South Korea) from January 2018 to May 2019. Patients aged over 40 years and diagnosed with COPD were initially enrolled, and all those who had used inhalers of any kind for more than 1 month were recruited to the study. The intervention included three visits over 6 months; follow-up visits were performed every 3 months. In total, 72 patients were excluded for the following reasons: inhaler device changed during the study period (n = 30); lost to follow-up (n = 40); and did not complete the three visits (n = 2). COPD patients who completed 3 visits and maintained the same inhaler device during study period were finally analyzed. Finally, 261 patients using 308 inhalers were included (Fig. ). The inhalers included the Turbuhaler, Breezhaler, Ellipta, Diskus, Genuair, Respimat, and pressurized metered dose inhalers (pMDI) models. We excluded patients using a pMDI with a spacer, using other inhalers, those with advanced cancer, and pregnant females.
All COPD patients who visited our respiratory outpatient clinic and agreed to participate were enrolled. The intervention included three visits over 6 months; follow-up visits were performed every 3 months. All patients had undergone pulmonary function tests within the 3 months prior to enrolment. At visit 1 (baseline), written informed consent was obtained from all patients. A general questionnaire exploring age, sex, body mass index, smoking status, COPD duration, previous inhaler education, previous COPD education, and educational level was administered. The modified Medical Research Council (mMRC) dyspnea scale , the COPD Assessment Test (CAT) , the Mini-Mental State Examination (MMSE) , the EuroQol-5D (EQ-5D) instrument , and the Patient Health Questionnaire (PHQ-9) were administered at the first visit to assess quality of life, together with the Feeling of Satisfaction with Inhaler questionnaire (FSI-10) . All questionnaires were available free online. An advanced practice nurse assessed inhaler technique and adherence, and delivered face-to-face training using the "teach-back" technique, in which the nurse asks: "Can you show me what I showed you and explain it to me?" "Teach-back" requires patients to explain or demonstrate their skills after training . Training was repeated at visit 1 until the patient fully understood the inhaler device and could fully explain its operation. At visits 2 and 3, the nurse re-assessed inhaler technique and adherence and delivered further face-to-face "teach-back" training if any error was apparent. At visit 3, we re-administered the mMRC, CAT, EQ-5D, and PHQ-9 (to assess changes in quality of life), as well as the FSI-10.
An advanced practice nurse specializing in inhaler education performed all of the interviews and training sessions . The nurse had been trained by our COPD specialists and had taught inhaler techniques to COPD patients for 3 years. Critical errors were defined as errors seriously compromising drug delivery to the lung; we created a standardized checklist of the critical steps in inhaler use by reference to the review literature . The critical errors are listed in Table . Adherence was self-reported and graded as good, partial, or poor, according to whether the entire daily dose was taken, the daily dose (frequency or amount) taken was more or less than required, or the medication was taken only as needed or not at all, respectively . The FSI-10 (10 questions) is a validated self-administered questionnaire evaluating patient satisfaction with their inhaler , . The answer options range from "hardly at all" (score of 1 on a 5-point Likert scale) to "very" (score of 5); the total score thus ranges from 10 to 50, with higher scores indicating better satisfaction. Inhaler convenience, maintenance, portability, and "feel" are all assessed by the FSI-10.
Continuous variables are expressed as means ± standard deviations (SDs) and were compared using Student’s t -test or the Mann–Whitney U test. Categorical variables were compared using the chi-squared test or Fisher’s exact test. In all analyses, a two-tailed p-value < 0.05 was considered to indicate statistical significance. All statistical analyses were performed using SPSS software (ver. 24.0; SPSS Inc., Chicago, IL, USA). A prospective power calculation indicated that an overall sample size of 220 was required to evaluate the efficacy of education (95% power, α = 0.05, effect size = 0.3). To allow for dropout, we sought to enroll 260 patients .
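The power calculation above reports only its inputs (95% power, α = 0.05, effect size 0.3) and the resulting target of 220 patients, not the underlying test. For orientation only, a generic two-sided normal-approximation sample-size formula can be sketched as follows; this is an assumption-laden illustration, not the authors' calculation (their figure of 220 implies a different test or correction):

```python
from math import ceil
from statistics import NormalDist


def sample_size_normal_approx(effect_size: float, alpha: float = 0.05,
                              power: float = 0.95) -> int:
    """Two-sided normal-approximation sample size for a standardized
    effect (one-sample/paired form): n = ((z_{1-a/2} + z_{power}) / d)^2.
    Illustrative only; the study's actual test is not stated."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_power = z.inv_cdf(power)           # ~1.64 for power = 0.95
    return ceil(((z_alpha + z_power) / effect_size) ** 2)


n = sample_size_normal_approx(effect_size=0.3)
print(n)  # 145 under these assumptions
```

Different test families (two-sample designs, chi-squared tests, continuity corrections) give different constants, which is why such calculations should always state the test they assume.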
This study was conducted in accordance with all relevant tenets of the Declaration of Helsinki. The protocol was reviewed and approved by the institutional review board of our hospital (Yeungnam University Hospital Institutional Review Board 2017-09-012-001). Written informed consent was obtained from all patients.
Baseline characteristics

Patient baseline characteristics are listed in Table . The mean age was 69.8 years and males predominated (93.5%). The mean body mass index was 23.5 kg/m² and the mean COPD duration was 3.6 years. In total, 47 (18.0%) patients were current smokers and 179 (68.6%) were ex-smokers; 95.4% had received previous education on COPD and inhaler handling. One-third of the patients had a low educational level. Most exhibited mild-to-moderate airflow limitation (63.5 ± 17.5% of the predicted forced expiratory volume in 1 s [FEV1]). The mean mMRC and CAT scores were 1.3 ± 0.9 and 9.9 ± 5.6, respectively. The mean MMSE score was 29.3 ± 1.6.

Inhaler use/adherence before and after education

A total of 261 COPD patients using 308 inhaler devices were enrolled. The percentages of patients exhibiting at least one critical error during inhaler use, before and after education, are shown in Fig. . At visit 1, 43.2% (133/308) showed at least one critical error. After two educational visits, this value fell to 8.8% (27/308), and education improved the use of all included inhalers (Table ). All critical errors were reduced after repeated education. In terms of adherence, the proportion of good compliers increased after two educational sessions, from 81.6 to 87.7% (p = 0.005; Fig. ).

Quality of life before and after education

We compared quality of life before (visit 1) and after (visit 3) education. The scores on the mMRC, CAT, EQ-5D, and PHQ-9 did not improve significantly (Table ). None of the EQ-5D domain parameters showed significant improvement after education (Table ).

Inhaler satisfaction

Table shows the inhaler satisfaction scores before and after education. Scores on all 10 items of the FSI-10 (p < 0.001 for each item) and the overall score (44.36 ± 4.69 to 47.64 ± 4.08, p < 0.001) improved significantly after two educational sessions.
Of 261 COPD patients using 308 inhalers, at least one critical error was observed in 133 (43.2%) of inhaler uses at visit 1. After two "teach-back" educational sessions, this fell to 27 (8.8%), irrespective of inhaler type. The proportion of patients exhibiting good adherence also increased, as did inhaler satisfaction, but quality of life did not improve. To date, findings on the effects of inhaler education on quality of life have been conflicting. While some studies reported positive associations between inhaler educational interventions and quality of life , , – , others did not , . Two studies showing positive associations evaluated only the short-term (1–3 months) effects of education , ; the other two studies enrolled only asthma patients , . In our study, the mMRC, CAT, EQ-5D, and PHQ-9 instruments revealed no relationship between education and improved quality of life. The characteristics of our population (COPD patients only) and the relatively long interval before measurement of outcomes (6 months) may explain this lack of association. Although certain subgroups of patients might be expected to enjoy a better quality of life after inhaler education, more research is needed to confirm this. Inhaler satisfaction reflects how satisfied patients are with their inhaler devices in terms of ease and convenience of use. It is an important part of the treatment of chronic airway diseases, enhancing both adherence and disease control , . Inhaler satisfaction differs among inhaler devices in asthma and COPD patients , . A previous study showed that patients with asthma were significantly more satisfied with their inhalers than patients with COPD; younger age, good disease control, previous inhaler training, and good adherence were associated with high satisfaction levels . We found that repeated education significantly improved satisfaction (on all 10 FSI-10 items) in COPD patients.
Improved inhaler satisfaction may affect various clinical outcomes in the long run; however, little is known about this relationship. Our study demonstrated an association between inhaler education and inhaler satisfaction, and future studies examining whether an improved FSI-10 score correlates with better disease control are needed. The GOLD 2019 guidelines state that, after reviewing symptoms and determining dyspnea and exacerbation status, inhaler technique/adherence should be repeatedly assessed; drug potency is irrelevant if the drug is not delivered properly . Many studies have found that educational interventions reduce inhaler errors and improve adherence in patients with airway diseases , , , , , , with repeated education being the optimal approach. Most of these studies were performed in asthma patients , , . Some enrolled COPD patients , , , but most of the educational programs were brief: in three programs, three educational visits were scheduled at 2-week intervals or within a 1-month program , , , and one study assessed changes in inhaler technique at 4–6 weeks after education . We scheduled three educational visits at 3-month intervals and analyzed the outcomes at 6 months. A strength of our study is that it included relatively long-term evaluations (6 months) after repeated education of COPD patients, and it clearly shows that the effect of a single education session on inhaler technique and adherence persists for a relatively long period (3 months). Critical errors were common for all inhaler types at visit 1. Among dry powder inhaler (DPI) users, Turbuhaler, Breezhaler, and Genuair users made more critical errors than Diskus and Ellipta users. After two educational sessions, the critical error rate was less than 10% among DPI users. Those using the Respimat and pMDIs made more critical errors than the DPI users at visit 1.
Education decreased the rate of critical errors among Respimat and pMDI users to 10%. Although the improvements differed somewhat among the devices, all critical error rates fell. One large real-world study assessed 2935 COPD patients using 3393 devices; critical errors were divided into dose preparation and dose delivery errors . Dose preparation errors were common in Respimat and Turbuhaler users, and dose delivery errors in Respimat and pMDI users; our findings were similar. In our study, dose preparation errors were commonly observed in Turbuhaler users (failure to prime with the device upright, 45.0%), Breezhaler users (failure to press the button that pierces the capsule, 21.3%), Genuair users (failure to hold the inhaler horizontally for priming, 16.7%), and Respimat users (failure to twist the base by one half-turn, 19.3%). Dose delivery errors were more common in Respimat and pMDI users, and included failure to synchronize actuation and inhalation (24.8% and 25.0%, respectively) and failure to inhale slowly and deeply (26.9% and 30.0%, respectively). All critical error rates fell after two educational interventions. Our work had certain limitations. First, this was a single-center study lacking a control group, so selection bias was inevitable. Inhaler use assessment and education are essential components of COPD management, so it would have been unethical to withhold education from a control group; therefore, we compared several parameters before and after the educational intervention. Also, 40 patients were lost to follow-up, so the utility of the education may have been overestimated, because the lost patients might have rejected the intervention. However, the marked improvements in inhaler handling, adherence, and satisfaction that we observed emphasize that education is useful.
Second, other factors known to affect quality of life in COPD, such as the type of inhaler and comorbidities, were not included in this study. Finally, we did not explore how long the effects of education persisted; more studies are needed on this topic. This study also had several strengths. First, few such studies have been performed in Korea , , and we enrolled only COPD patients; COPD and asthma differ, so the effects of education may also differ between these populations. Second, we assessed many quality of life outcomes (using the mMRC, CAT, EQ-5D, and PHQ-9 instruments), as well as inhaler satisfaction (using the FSI-10) and inhaler technique and adherence. As mentioned above, few studies have explored changes in quality of life after educational interventions, and to the best of our knowledge, this is the first study to report improved inhaler satisfaction after education. Improvements in inhaler satisfaction may lead to improvements in various clinical outcomes in COPD patients over the long term. This study highlights once again the importance of repeated inhaler education. Third, our study differs from others in that we assessed the effects over a relatively long period (6 months). Determining how long the effects of education last can answer the question of how often education should be delivered, and our research is unique in this respect. Fourth, we found that inhaler training was highly effective in improving inhaler satisfaction, technique, and adherence in a real-world setting, and that the effects were relatively persistent. In future studies, we will seek to precisely determine how long the effects of education persist.
Repeated education delivered by an advanced practice nurse improved inhaler satisfaction, technique, and adherence. However, inhaler education did not significantly improve quality of life. More detailed studies are needed to determine the number of educational sessions required, the optimal intervals, and the duration of any benefits thus achieved.
Identifying immunohistochemical biomarkers panel for non-small cell lung cancer in optimizing treatment and forecasting efficacy

According to the 2020 statistics, lung cancer was the second most commonly diagnosed cancer, with approximately 2.2 million new cases, and the leading cause of cancer-related death, with an estimated 1.8 million deaths. Lung carcinoma accounts for approximately 11.4% of all cancer diagnoses and 18.0% of all cancer-related deaths . Non-small cell lung cancer (NSCLC), which constitutes over 85% of all lung cancers, is diagnosed at an advanced stage in 60% of cases, by which point the opportunity for surgical treatment has been lost. Nevertheless, recent advances in chemotherapy and immunotherapy have improved survival for NSCLC patients . Even so, the five-year survival rate of NSCLC patients is only 20%, and there are currently no reliable biomarkers to guide treatment decisions and assess prognosis . It is crucial to identify potential resistance mechanisms to chemotherapy and immunotherapy to avoid ineffective treatments and unnecessary costs . Understanding these mechanisms will also help determine the best therapeutic schedule. To enable personalized treatment approaches, various biomarkers are being investigated before deciding the treatment plan. The European Society for Medical Oncology recently released a list of predictive biomarkers for the diagnosis, treatment, and monitoring of non-oncogene-addicted and oncogene-addicted metastatic NSCLC . They recommend that all stage IV NSCLC cases undergo programmed cell death 1 ligand 1 (PD-L1) immunohistochemistry (IHC) testing to determine whether anti-PD-1 or anti-PD-L1 immune-checkpoint inhibitors are appropriate for the patient .
The results of KEYNOTE-042 showed that patients treated with pembrolizumab had a significantly longer overall survival than those treated with chemotherapy in all tumor proportion score (TPS) populations (≥ 50%: hazard ratio 0.69, 95% CI 0.56–0.85, p = 0.0003; ≥ 20%: 0.77, 0.64–0.92, p = 0.0020; ≥ 1%: 0.81, 0.71–0.93, p = 0.0018) . However, PD-L1 expression alone should not be the sole factor in determining whether a patient should receive monotherapy or combination therapy; other factors, such as the patient's preference or smoking history, must also be considered . In 2020, the Food and Drug Administration granted accelerated approval for pembrolizumab for the treatment of solid tumors with a high tumor mutational burden (TMB-H) of ≥ 10 mut/Mb in both adult and pediatric patients . While researchers are developing more predictive biomarkers, it remains challenging to combine biological biomarkers with artificial intelligence (AI) technologies to establish innovative treatment-choice and prognostic models. In this study, we performed a thorough examination of 140 NSCLC patients treated with chemotherapy or immunotherapy. We assessed various factors, including patients' basic information and IHC data. Our objective was to build a prognostic model from the patients' basic information and IHC findings and to determine the most suitable therapeutic decision for each patient.

Clinicopathological features of cases

This investigation was approved by the Clinical Research Ethics Committee of the Second Hospital of Nanjing (2024-LS-ky004). The study was conducted in accordance with the principles of the Declaration of Helsinki. The investigation focused on collecting data from patients diagnosed with stage III or IV NSCLC who received chemotherapy or immunotherapy at The Second Hospital of Nanjing in China from March 2020 to November 2023.
Medical records of these individuals were reviewed; patients who had received fewer than two treatment cycles, had no IHC results, or had no treatment assessment were excluded. Ultimately, 140 patients were included in this study, representative of the Jiangsu region of China. Clinicopathological data and therapeutic responses were extracted from electronic medical records. Therapeutic responses were assessed using the Response Evaluation Criteria in Solid Tumors (RECIST version 1.1) and classified into four categories: complete response (CR), partial response (PR), stable disease (SD), or progressive disease (PD). The effectiveness of treatment was evaluated using computed tomography or magnetic resonance imaging. The endpoint of this study was PD based on this evaluation.

Experimental procedure of IHC

Tissue samples were stained with hematoxylin and eosin before being subjected to immunohistochemistry using the horseradish peroxidase-labelled polymer method. Sections (3 μm) were cut from formalin-fixed, paraffin-embedded tissue blocks and placed on silanized slides. The sections were dewaxed in xylene and rehydrated in graded alcohol solutions, followed by antigen retrieval using the pressure-cooker method in 10 mM citrate buffer (pH 6.0) for 2 min. After cooling to room temperature, slides were loaded onto the Gene Stainer system for immunohistochemical staining. All monoclonal antibodies except PD-L1 were obtained from Guangzhou Ambipine Medical Technology Co., Ltd.; the PD-L1 antibody (clone 22C3) was obtained from Agilent Technology Co., Ltd. The remaining murine antibodies used in the study were TTF-1 (8G7G3/1), P63 (plate_number_1), CK5/6 (D5&16B4), Ki67 (SP6), CK7 (OV-TL 12/30), EGFR (EP38Y), napsin A (MRQ-60), and villin (CWWB1).
Semi-quantitative assessment of the immunohistochemical studies was as follows: -, negative; +, less than 5%; ++, 5-50%; +++, ≥ 50% inflammatory cells.

Binary classifier selection

We used supervised binary classification to categorize the potential therapeutic regimen as either chemotherapy or immunotherapy. To evaluate the potential prognostic response of patients, we likewise framed progression-free survival (PFS) as a supervised binary classification with two categories, < 180 and ≥ 180 days. For both tasks, NSCLC patients were randomly divided into training and validation groups at a 7:3 ratio. The light gradient boosting machine (LightGBM) classification algorithm was used. Model performance was assessed using area under the curve (AUC) values and the confusion matrix, from which accuracy, precision, recall, and F1 score were calculated, providing a comprehensive assessment of performance. Accuracy is the percentage of correct predictions among all samples:

$$Accuracy = \frac{TP + TN}{TP + TN + FP + FN} \times 100\%$$

Precision is the proportion of positive predictions that are correct:

$$Precision = \frac{TP}{TP + FP} \times 100\%$$

Recall is the proportion of true positives that are correctly identified:

$$Recall = \frac{TP}{TP + FN} \times 100\%$$

The F1 score takes into account both precision and recall:

$$F1 = 2 \times \frac{Precision \times Recall}{Precision + Recall}$$

In these equations, a true positive ( TP ) occurs when the algorithm correctly predicts a patient's classification.
On the other hand, a false positive ( FP ) occurs when the algorithm incorrectly predicts the patient's classification. A true negative ( TN ) occurs when the algorithm accurately predicts that a patient does not belong to a certain classification. Finally, a false negative ( FN ) arises when the algorithm fails to predict that a patient belongs to a certain classification.

Statistical analysis

The duration between the start of treatment and either PD or death is referred to as the PFS. For patients who did not experience PD, data were censored at the last disease assessment. Median PFS (mPFS) estimates were generated using the Kaplan-Meier method and compared using the log-rank test. All p-values were obtained from two-sided tests and confidence intervals were calculated at the 95% level. Results were considered statistically significant at p < 0.05.
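The confusion-matrix metrics defined in the methods can be reproduced in a few lines. The following is an illustrative sketch only (the counts below are hypothetical, not the study's data):

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute accuracy, precision, recall, and F1 from the four
    confusion-matrix counts defined in the methods."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # also known as sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}


# Hypothetical validation-set counts, for illustration only:
m = classification_metrics(tp=50, fp=10, tn=30, fn=10)
print({k: round(v, 3) for k, v in m.items()})
```

Note that precision and recall are undefined when their denominators are zero (no predicted or no actual positives); production code should guard against that case.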
Patient demographics

This study analyzed the data of 140 NSCLC patients, with 69 receiving immunotherapy and 71 receiving chemotherapy (Table ). None of the patients harbored targetable drivers approved by the European Medicines Agency (EMA). Among those who received immunotherapy, the median age was 66 years and 73.9% of patients were male. Non-squamous cell carcinoma was the most common histological type, observed in 37 patients (53.6%), followed by squamous carcinoma in 32 patients (46.4%). At diagnosis, 25 patients (36.2%) were in stage III and 44 (63.8%) were in stage IV. Similarly, in the group that received chemotherapy, the median age was 66 years and 78.9% of the patients were male. Non-squamous cell carcinoma was observed in 42 patients (59.2%), whereas squamous carcinoma was observed in 29 patients (40.8%). At the time of diagnosis, 32 patients (45.1%) were in stage III and 39 (54.9%) were in stage IV.

Therapeutic decision-making based on IHC and patient characteristics

Tumor identification and treatment selection rely heavily on pathological examination and clinical guidelines; nevertheless, clinicians' expertise plays a significant role in this process. Hence, we developed a machine-learning model to provide clinicians with automated treatment recommendations. Our model uses a supervised binary classification algorithm to predict the effectiveness of immunotherapy and chemotherapy based on patient characteristics and IHC biomarker results. We utilized the LightGBM model, an ensemble machine-learning algorithm, to establish the relationship between the input and output (Fig. A).
LightGBM is a highly efficient gradient-boosting decision tree algorithm that uses techniques such as gradient-based one-side sampling and exclusive feature bundling to handle large datasets and feature sets with ease. Its histogram-based approach reduces the number of candidate split points, resulting in faster training and improved performance . The model achieved accuracy, precision, recall, and F1 scores of 82.1%, 81.2%, 82.1%, and 81.6%, respectively. Figure B shows the AUCs for the therapeutic regimens (chemotherapy and immunotherapy) in the validation group, which were both 0.93. The algorithm also identified important markers, such as PD-L1, Ki67, p63, tumor stage, and napsin A (Fig. D).

Prognostic prediction by machine-learning models

To distinguish between patients with good and poor prognoses, we separated them into two groups based on PFS time (< 180 days and ≥ 180 days). Using the LightGBM model with patient characteristics, IHC results, and therapeutic regimens as inputs, we predicted PFS with an accuracy of 82.1%, precision of 82.3%, and recall of 82.1%; the F1 score, which considers both precision and recall, was also 82.1%. In the validation group, the AUC values for PFS times of less than 180 days and of 180 days or more were both 0.89, as depicted in Fig. A. Additionally, the algorithm identified critical indicators, such as Ki67, PD-L1, TTF-1, CK5/6, and age, as shown in Fig. C. To test our model's prognostic performance more rigorously, we applied it to samples from external datasets of The Cancer Genome Atlas (TCGA). Surprisingly, when the model trained on our data was used to predict the TCGA datasets, the accuracy, precision, recall, and F1 scores were 96.8%, 97.0%, 96.5%, and 96.7%, respectively.
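The AUC values reported throughout can be computed directly from predicted scores and true labels via the rank-based (Mann-Whitney) formulation, without any machine-learning framework. Below is a minimal sketch with hypothetical scores and labels (not the study's data):

```python
def auc_score(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case receives a higher score than a
    randomly chosen negative case (ties count as 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


# Hypothetical example: label 1 could stand for PFS >= 180 days.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.2]
print(auc_score(labels, scores))
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is why values such as 0.89 or 0.93 indicate strong discrimination.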
The AUCs for PFS < 180 days and ≥ 180 days in the validation group from the TCGA datasets were both 0.98 (Fig. D). Figure E displays the confusion matrix computed in the validation cohort, which further demonstrates how accurately the LightGBM model predicted each class.

Analysis of vital biomarkers

According to the LightGBM model, IHC results have the potential to guide treatment decisions and can serve as prognostic markers. In light of this finding, we focused on the critical biomarkers identified in Figs. C and C. The survival rates of patients who underwent chemotherapy or immunotherapy were not significantly different (Fig. ). Nonetheless, patients with both squamous and non-squamous cell cancers were more likely to choose immunotherapy if PD-L1 was highly expressed. Figure A and B show the immunohistochemical expression results of the chemotherapy and immunotherapy groups, respectively. This observation was statistically significant, with p-values below 0.001 (Fig. C). Among our patients with PD-L1 ≥ 50%, 95% chose immunotherapy, whereas among patients with PD-L1 < 50%, 70% chose chemotherapy. Furthermore, patients who received immunotherapy and had PD-L1 TPS ≥ 50% had a longer mPFS of 470 days versus 180 days for those with TPS < 50% (p = 0.002; Fig. D). Regarding TTF-1, non-squamous NSCLC patients had high expression levels (Fig. A and B), which did not affect clinical decisions, as depicted in Fig. C. However, among non-squamous cell cancer patients, those who were TTF-1 positive had a longer mPFS of 550 days compared with only 110 days in TTF-1 negative patients. This difference was statistically significant (p < 0.001; Fig. D). Meanwhile, squamous NSCLC patients had high expression levels of p63 and CK5/6 (Fig. ).
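An AUC such as the 0.93, 0.89 and 0.98 values above equals the probability that a randomly chosen positive case is scored higher than a randomly chosen negative case. A minimal rank-based (Mann-Whitney) computation on toy scores, not the study's predictions:

```python
def roc_auc(y_true, scores):
    """AUC via the Mann-Whitney statistic: fraction of (positive, negative)
    pairs ranked correctly; ties count one half."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc([1, 1, 0, 0], [0.9, 0.4, 0.35, 0.1]))  # 1.0 (perfectly ranked)
print(roc_auc([1, 0, 1, 0], [0.8, 0.7, 0.3, 0.2]))   # 0.75 (one of four pairs misranked)
```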
Patients with squamous cell carcinoma who exhibited high p63 expression had a significantly longer mPFS of 410 days, compared with only 100 days in those with negative expression (p < 0.001; Fig. B). Additionally, patients with squamous cell carcinoma showed higher levels of CK5/6 expression (Fig. C). Among these patients, those with medium or strong positive CK5/6 expression had a significantly longer mPFS of 550 days, compared with only 160 days in those who were weakly positive (p = 0.0007; Fig. D). Next, we found that CK7 and napsin A were highly expressed in non-squamous carcinoma patients; however, our PFS analysis did not reveal any significant differences (Fig. and ). Similarly, there was no variation in Ki67 and Villin expression (Fig. and ). Our research indicated that patients in clinical stage III had a longer mPFS of 590 days, compared with 220 days in clinical stage IV (p < 0.001, Fig. A). Additionally, we observed that patients with poorly differentiated tumors had a poorer prognosis (240 vs. 550 days, p = 0.002, Fig. B). Finally, we did not identify any statistically significant correlations among PFS, age, and tumor type (Fig. C and D).

Combined diagnosis by IHC panel

Through analysis using the LightGBM model, we identified six key biomarkers that form a unique detection panel able to predict the optimal therapeutic regimen and PFS: PD-L1, TTF-1, p63, CK5/6, disease stage, and differentiation degree. The heatmap in Fig. illustrates the differences in the expression of these biomarkers between groups.
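Group contrasts such as the PD-L1, p63 and CK5/6 comparisons above reduce to splitting patients at a marker cutoff and taking the median PFS per group (significance testing, e.g. a log-rank test, is a separate step). A sketch with hypothetical values, not patient data:

```python
from statistics import median

def median_pfs_by_cutoff(records, marker, cutoff):
    """Median PFS (days) for patients at/above vs. below a marker cutoff."""
    high = [r["pfs_days"] for r in records if r[marker] >= cutoff]
    low = [r["pfs_days"] for r in records if r[marker] < cutoff]
    return median(high), median(low)

# Hypothetical PD-L1 TPS values and PFS times (toy data, not the cohort)
toy = [
    {"pdl1_tps": 80, "pfs_days": 500},
    {"pdl1_tps": 60, "pfs_days": 440},
    {"pdl1_tps": 30, "pfs_days": 200},
    {"pdl1_tps": 5, "pfs_days": 160},
]
print(median_pfs_by_cutoff(toy, "pdl1_tps", 50))  # (470.0, 180.0)
```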
PD-L1, in particular, can effectively guide the selection of treatment plans, as patients with high PD-L1 expression are more likely to benefit from immunotherapy. Moreover, higher expression levels of PD-L1 in the immunotherapy group were associated with longer PFS, indicating better treatment outcomes. Higher expression levels of TTF-1 and CK5/6 predicted better therapeutic outcomes. Disease stage and differentiation are relatively well understood, with stage III patients having a better prognosis than those with stage IV disease (Fig. A), whereas patients with poorly differentiated tumors have a worse prognosis (Fig. B).
Clinical trials have shown promise for both chemotherapy and immunotherapy in the treatment of NSCLC. However, not all patients achieve long-term progression-free or overall survival, making accurate prediction of the most effective treatment plan and prognosis crucial; tumor tissue biomarkers play a critical role in this process. In our study, we analyzed IHC data from 140 patients with NSCLC using the LightGBM model and correlated the IHC data with treatment regimen and prognosis to identify key biomarkers, including PD-L1, TTF-1, p63, CK5/6, disease stage, and differentiation degree. Our study successfully identified these biomarkers with AI-powered assistance. Detection of PD-L1 expression by IHC is a valuable tool for predicting the response to anti-PD-1 or anti-PD-L1 antibodies in patients with different types of tumors . This predictive biomarker can be found on both immune and tumor cells. PD-1 negatively regulates the immune response against tumors, and its expression allows tumor cells to evade immune surveillance. Clinical trials have evaluated the predictive capacity of PD-L1 expression on tumor cells using the tumor proportion score (TPS), defined as the percentage of viable tumor cells with partial or complete membranous PD-L1 staining relative to all viable tumor cells present in the sample.
Our study of stage III/IV NSCLC patients confirmed that PD-L1 has predictive value, with patients having a TPS ≥ 50% experiencing longer PFS than those with a TPS < 50%. Thyroid transcription factor-1 (TTF-1) is a critical regulator of tissue-specific genes. It is primarily present in type 2 alveolar epithelial cells and plays a vital role in the development and differentiation of the lungs, thyroid gland, and forebrain . TTF-1 overexpression is a favorable prognostic factor for patients with non-squamous NSCLC , as demonstrated by our results in Fig. D. Studies have shown that TTF-1 is positive in over 90% of lung adenocarcinomas with EGFR mutations, with higher sensitivity observed in non-smokers and males . However, TTF-1 positivity is not exclusive to EGFR mutations, and our findings indicate that only 20.8% of TTF-1 positive tumors harbor EGFR mutations . Therefore, when evaluating the value of TTF-1 IHC as a screening tool for EGFR mutations, the negative predictive value (NPV) becomes more important than the positive predictive value. Our TTF-1 IHC results showed an NPV of 82.3% for EGFR mutations. The p63 protein is a member of the p53 family of nuclear transcription factors and is recognized for its diverse abilities, including transactivating reporter genes, inducing apoptosis, and acting as a dominant-negative agent . Our findings demonstrate that p63 has significant prognostic value in patients with squamous NSCLC. CK5/6 is a widely used immune marker for identifying lung squamous cell carcinoma. Normally expressed in squamous cells, ductal epithelial basal cells, myoepithelial cells, and mesothelial cells, its expression in lung squamous cell carcinoma ranges from 75 to 100%. Notably, high CK5/6 expression in squamous NSCLC is linked to longer PFS compared with low expression.

In summary, our team implemented a highly accurate model to predict the therapeutic regimen and prognosis of NSCLC patients.
Using combined IHC biomarkers, we achieved an accuracy of 82.1% when predicting the treatment regimen, and the same accuracy when predicting PFS under that regimen. When the prediction was applied to the external TCGA datasets, the accuracy reached 96.8%. Our study identified six critical biomarkers that form an exclusive detection panel for NSCLC. One interesting finding is that PD-L1 TPS expression levels can guide the decision to use chemotherapy or immunotherapy. Furthermore, in the immunotherapy group, high or low expression of this indicator could predict prognosis with a cutoff value of 50%. While TTF-1 does not affect clinical decisions, patients who are positive for this marker have a longer PFS in non-squamous NSCLC. We also discovered that p63 was highly expressed in squamous cell carcinoma, and squamous cell carcinoma patients with positive p63 expression had longer PFS. Positive CK5/6 expression in patients with squamous carcinoma also indicates better treatment outcomes. Our team also conducted a detailed analysis of other markers, such as CK7, napsin A, Ki67, Villin, clinical stage, differentiation degree, age, and tumor histological features. Together, the independent and joint analyses of these biomarkers make up our unique detection panel for NSCLC patients, allowing personalized treatment regimens to be developed and avoiding unnecessary over- and under-treatment. Further extensive research is still needed, including larger patient populations, prospective clinical studies, and mechanistic analysis of the biomarkers.

Below is the link to the electronic supplementary material.

Supplementary Material 1
Current treatment options for cluster headache: limitations and the unmet need for better and specific treatments—a consensus article

Cluster headache (CH) is the most common trigemino-autonomic cephalalgia . The recurrent attacks, occurring up to 8 times per day, are among the most severe pains described by humans, exceeding gunshot wounds, childbirth and kidney stones . The lifetime prevalence is 1.24/1000 and the typical age of onset is 20–40 years . Existing treatments for CH were originally developed for other medical conditions and are based on empirical data . The three existing European guidelines for the management of CH are based on very few and small studies, most of which do not fulfil modern standards: the 2023 European Academy of Neurology guidelines on cluster headache, the 2006 European Federation of Neurological Societies (EFNS) guidelines on the treatment of CH and other trigemino-autonomic cephalalgias (mainly for neurologists), and the European Headache Federation (EHF) guidelines for headache disorders (mainly for primary care physicians) . In addition, national guidelines exist. Therefore, the aim of this paper is to provide insights into the unmet need for safe and tolerable CH preventive medication from the perspectives of people with CH and society, the headache specialist, and the cardiologist. To do this, we review and discuss existing treatment possibilities for CH. Neurostimulation and future perspectives are also discussed in this consensus paper, which arises from some of the major CH clinics and research centers in Europe.

From the perspective of patient and society

"I'd rather give birth than endure a cluster headache attack"

CH has an impact on all aspects of people's lives, including higher proportions of somatic and psychiatric comorbidity .
In a recent interview-based Danish study on personal and economic burden, 92% of people with episodic CH (ECH) in bout, 98% of people with chronic CH (CCH) and even 15% of people with ECH in remission reported being restricted in their everyday lives . People with CH do not present with any visible physical handicap, which hinders understanding from family, friends and colleagues . The disease mainly affects the younger half of the population, in which careers and family lives are being established, and in 21% and 48% of people with ECH and CCH, respectively, CH led to dependency on family and friends . In clinical experience, family members report feeling helpless and afraid, because people with CH may become irritable or aggressive as part of their attacks, or might even become self-harming. The mean diagnostic delay, although decreasing, is 6 years, during which patients are therapeutically mismanaged . Misdiagnosis occurs in 49% of people with CH, most often as migraine, tension-type headache or sinusitis, and removal of a healthy tooth has been reported in 15–43% . Females are misdiagnosed more frequently than males; suggested reasons are pre-assumptions that women have migraine, a lower male:female ratio in chronic patients, and differences in clinical presentation . Self-rated health is strongly associated with mortality, making it an important instrument when investigating the burden of a disease . Self-rated health is significantly reduced in ECH, and in CCH the odds of rating one's health as ‘good’ or ‘very good’ are tenfold lower than in matched controls . Co-existing depression and anxiety also occur more frequently in people with CH than in controls . Suicidal thoughts are reported by 47–55% and suicide attempts by 1.3–2% of people with CH .
"Although I'm not in bout, I still fear attacks every day"

Growing evidence indicates that people with ECH are both physiologically and psychologically affected even in remission periods, implying that despite its cyclic nature, ECH is also a chronically disabling disease. Self-rated quality of life is significantly lower in people with ECH during remission than in healthy controls, perhaps partly explained by worry and avoidance behavior towards potential triggers . In addition, observational studies suggest that parameters such as sleep and other hypothalamic functions are altered not only during the active disease phase but also up to one year into remission , i.e., the attacks might be regarded as "the tip of the iceberg" of pathophysiological activity. An extensive epidemiological study from Sweden has shown that twice as many people with CH as age-, sex- and demographically matched controls were on disability pension (10.3% vs. 5.8%), in agreement with previous studies . The number of sick days is also significantly higher for people with CH than for controls . The economic burden of CH is significant and warrants attention . In a Danish study, the direct costs of medication and healthcare services sum up to 5,178 euros per patient per year (CCH: 9,158 euros), mainly driven by acute medication and hospital admissions . A German study from 2011 found direct costs of 4,737 euros per patient over half a year; including indirect costs, the yearly costs amount to 11,739–11,926 euros per patient . Although not directly comparable (costs reported per bout), an Italian study found that the total cost of a CH bout was €4,398 per patient, and the total cost of CCH was 5.4 times higher than that of ECH (€13,350) .

From the perspective of a headache specialist

Treatment is initiated using a trial-and-error approach, and the close follow-up required is challenging in most healthcare settings due to organizational and resource limitations.
It is a significant limitation in CH treatment that the existing guidelines are based on very few and small studies not fulfilling modern standards. They largely agree on first- and second-choice treatments but vary on recommended dosages and electrocardiogram (ECG) monitoring intervals. Beyond the first- and second-line options, the evidence is even more sparse. The treatment of CH can be divided into three categories: 1) acute treatment, aiming to abort the single attack; 2) preventive treatment, which, taken at regular intervals, aims to lower attack frequency and pain intensity; and 3) transitional treatment, which can be used as a short-lasting preventive if bouts are short or, more often, to obtain a "bridging" effect while a preventive is titrated to its therapeutic threshold (Fig. ). The goal must always be to suppress attacks with preventives, minimizing the need for acute treatment.

Acute treatment

Inhaled 100% oxygen and triptans are the cornerstones of acute CH treatment, and it is recommended to prescribe both. Simple analgesics and opioids are not effective . In addition, inappropriate use of opioids increases the risk of substance abuse.

Oxygen

One large randomized, double-blind, placebo-controlled crossover study with 109 participants showed that 78% of those inhaling 100% oxygen were pain-free or reported an adequate effect after 15 min, compared with 20% of those receiving air . These findings were confirmed in a small double-blind crossover study with 19 participants and in an open-label study of 33 episodic and 19 chronic participants . An international survey covering 56 countries (23% of responders were from Europe) found that more than half of the participants reported oxygen use in CH to be "very effective" or to give "complete" remission . Data from the Danish CH Survey similarly found that 75% had a 50% response to oxygen .
Oxygen is generally safe and without side effects; however, it is impractical to carry around and use multiple times a day outside the home. Furthermore, availability differs between countries: oxygen is fully reimbursed (or with only minor restrictions) in just 12 countries, accounting for 63% of the European population .

Triptans

Triptans are easy to carry along, but they are costly, and official guidelines limit their use to twice per day. However, based on individual assessment and the lack of options, many people with CH exceed this limit in agreement with their neurologist. Triptans are contraindicated in people with certain cardiovascular diseases, as the vasoconstrictive effect has been theorized to increase the risk of stroke and acute myocardial infarction. Again, people may be so burdened that usage may still be offered after thorough counseling. The administration route affects efficacy. Subcutaneous sumatriptan has been shown to induce complete pain freedom within 20 min in 75% of participants; sumatriptan nasal spray induced pain freedom in 47% versus 18% for placebo at 30 min; in episodic participants, oral zolmitriptan 10 mg induced meaningful pain reduction in 47% versus 29% for placebo; and the effect of nasal zolmitriptan 5 and 10 mg within 30 min was 40% and 62%, respectively . The oral formulation is generally not recommended due to its slower effect but may be the only available treatment in many countries. Triptans are reimbursed completely or with minor restrictions in 16 European countries, representing 66% of the population . However, it is the authors' experience that people on social support still find it difficult to pay for injectable/nasal sumatriptan.
Overall, oxygen and triptans are effective; the major problems lie with the high number of daily CH attacks, which can necessitate an excessively high (off-label) daily intake of triptans, with non-responders, with patients who have limited access to the medication (unavailable or too expensive), and with patients who have cardiovascular comorbidities. These people may end up with problematic use of opioids or illegal drugs . In pregnancy and during breastfeeding, treatment with oxygen is considered safe; recommendations on the use of sumatriptan vary from limited use to no use .

Preventive treatment

Preventive treatment is the cornerstone of CH management, aiming to suppress or limit the extreme pain attacks. Even for people with effective acute treatment, the effect is not instant. Therefore, it is recommended, although not evidence-based, that people with ECH start preventive treatment as soon as attacks emerge and slowly taper off after two weeks without attacks (allowing for a swift increase if attacks re-emerge). In CCH, there is a need for continuous prevention. The existing treatment recommendations are based on small studies with a low level of evidence (listed in Table ). This would not necessarily be a problem if clinical experience showed these drugs to be well tolerated and effective; however, this is not the case. Below, we review the existing literature on the three major preventive treatment options in CH: verapamil, lithium and topiramate.

Verapamil

The mechanism of action of verapamil in CH remains unclear. Suggested mechanisms include vasospasm inhibition , GABA-A inhibition , circadian rhythm modulation , and a hyperpolarization-activated cyclic nucleotide-gated channel-mediated decrease of parasympathetic activity . The rationale for using verapamil as a first-line preventive treatment is based on two randomized controlled trials (RCTs) and three open-label studies.
The first double-blind crossover RCT lasted 23 weeks and compared verapamil 360 mg/day with lithium 900 mg/day in CCH (no placebo group). Only 50% on verapamil and 37% on lithium experienced a reduction in an unspecified headache index . It remains unknown what the index included; nevertheless, the study established the foundation for verapamil treatment. The second study randomized 30 people with ECH to either 14 days of verapamil 360 mg/day or placebo. In the second week, 80% on verapamil reported a 50% or greater reduction in attack frequency compared with 0% on placebo; however, just 27% became attack-free . An open-label study from 1983 tested several potential drugs, including verapamil 160–720 mg/day, finding an unspecified effect in five participants with CCH . This was followed by another open-label study from 1989 with 33 episodic and 15 chronic participants receiving verapamil 240–1200 mg/day. Improvement was seen after an average of 1.7 weeks in ECH and 5 weeks in CCH, and as many as 69% reported a greater than 75% effect . In the largest open-label study, of 52 episodic and 18 chronic participants, the authors aimed to personalize the timing and dosage of verapamil. With verapamil 360–920 mg/day, 94% of episodic and 56% of chronic participants became attack-free. People with ECH were included if, based on previous bout length and an intensifying clinical presentation, the expected bout length exceeded a few days . The major limitation of the open-label studies is the lack of a control group, as the level of placebo response and spontaneous remission is unknown. In CH there is extreme variability in bout length from bout to bout, and as many as 20% may change phenotype over time . In sharp contrast to these reports, new data from 400 consecutively recruited Danish persons with CH show that only 44% of episodic and 34% of chronic participants reported a reduction in attack frequency of more than 50%. Only 14% reported complete relief of attacks on verapamil .
Efficacy does not seem to be associated with dose or sex, but verapamil is more effective in ECH than in CCH . Side effects such as constipation, tiredness and oedema are reported in 12–86% of participants . Overall, it is the authors' experience that people with CH are willing to accept substantial side effects if they experience some relief. We would also dispute that most people have side effects at effective dosages of verapamil, although only very limited evidence on this is available. A whole range of cardiac contraindications exists for verapamil, including untreated 2nd- and 3rd-degree atrioventricular block, bradycardia and heart failure, severe hypotension, and Wolff-Parkinson-White syndrome. In addition, the list of possible interactions is long and includes frequently used medications such as atorvastatin (increased risk of myopathy and rhabdomyolysis), domperidone (risk of prolonged QT interval), clopidogrel (decreased antiplatelet effect), fluconazole (increased verapamil exposure) and lithium (neurotoxicity and bradycardia) .

Lithium

Lithium is recommended as second-line treatment but is most suited for people with CCH and has several limitations, discussed in the following section. Slow titration to a dose of 600–1500 mg daily, with serum levels between 0.6 and 0.8 mmol/L, is considered optimal. It is speculated that treatment with lithium was initiated because of the cyclic nature of the disease (as in bipolar disorder). Only two randomized controlled studies exist: one showing no effect in 27 episodic participants (after only 1 week of treatment) and one comparing lithium with verapamil, showing that 37% of chronic participants experienced a reduction in an unspecified CH index . The effect of lithium has been reviewed since 1981, when several open-label studies indicated a good effect, particularly in CCH .
The main problem with lithium is the high degree of acute and long-term side effects and that treatment requires frequent blood samples to titrate the dosage according to the therapeutic index, to avoid toxic serum levels and to monitor for adverse effects on kidney, liver and thyroid function. This is burdensome for both the patient and the health-care system. Topiramate Topiramate is also recommended as second-line treatment and may be used in both ECH and CCH. The anticonvulsant drug is recommended in doses of 50–200 mg daily by the 2019 EHF guidelines. The preventive effect of topiramate has been investigated in two small open-label studies with 13 and 33 participants, with conflicting results. The largest study found a reduction of more than 50% in only 21% of participants (study period of 20 days), and the smallest showed that 75%, mostly episodic participants, went into remission. The benefit of topiramate is that cardiac monitoring is not required. However, depression is a known side effect, especially in people with pre-existing depressive symptoms, which are reported in up to 67% of people with CH. Other prominent side effects, often leading to discontinuation in the clinic, are cognitive impairment and paresthesia. Topiramate cannot be used in people with kidney stones. Other preventive options In treatment-refractory patients, it may be necessary to try medical treatment with an even lower level of evidence, either as monotherapy or as add-on. Treatment with 10 mg oral melatonin was shown effective in a small RCT in 5 out of 10 people with ECH compared to none in the placebo group. In a small case–control study, mainly with chronic patients, no effect was observed. OnabotulinumtoxinA may have some additional effect in treatment-refractory CCH; however, evidence is sparse and the pathophysiological mechanism behind an effect using a migraine protocol is uncertain.
Although not available in most European countries, it is worth mentioning that retrospective chart reviews have suggested a possible effect of short-term (3–5 days in hospital) intravenous treatment with dihydroergotamine; in addition, 1–2 mg ergotamine without caffeine given at night may prevent nightly attacks if nausea is prevented in advance. LSD and Psilocybin Use of illicit drugs like psilocybin, lysergic acid diethylamide (LSD) and gamma-hydroxybutyrate (GHB) is more frequently reported by people with CH than by the general population. Several retrospective surveys and case reports indicate that psilocybin and LSD/the non-hallucinogenic bromo-LSD may in some cases abort attacks and extend the duration of remission periods. An explorative RCT with 17 episodic and chronic participants investigated microdosing of psilocybin, finding no difference between groups in efficacy and side effects. As with other medical treatments, RCTs are needed to evaluate efficacy and safety before treatment can be recommended. Transitional treatment Current preventive medications need to be titrated up to an effective dosage, and an intermediate treatment consisting of corticosteroids can therefore be applied if patients are burdened by many attacks. The 2019 EHF guidelines define treatment with corticosteroids, taken orally as prednisone or given as a greater occipital nerve block, as intermediate treatment, whereas the 2023 European Academy guidelines and the 2006 EFNS guidelines include them under preventive treatment. Prednisone The exact mechanism of action is poorly understood, but oral corticosteroids have been suggested to attenuate trigeminal activation and counteract hypothalamic dysfunction. A multi-centre, double-blind RCT from 2021 showed a fast onset of effect of 100 mg prednisone in 118 episodic participants, with 7.1 attacks in the first week compared with 9.5 in the placebo group.
Two earlier studies, from 1978 and 1975, support this: the first, a case series of 19 participants, showed that 58% became attack free with 10–80 mg prednisone daily for 3–10 days, and the second, a double-blind single cross-over study, also indicated efficacy. Short-term use is considered effective and safe (although people on rare occasions may develop psychiatric symptoms), but continuous use may increase the risk of the known systemic side effects of prednisone (opportunistic infections, hypertension, osteoporosis and metabolic diseases such as type 2 diabetes). Greater occipital nerve (GON) blocks The effect is thought to occur through a modulatory effect on nociceptive processing in trigeminal neurons via the trigemino-vascular system. Two double-blind RCTs exist. The first investigated three injections of cortivazol within 1 week in 28 episodic and 15 chronic participants. Two to four days after the third injection, 95% in the active group had two or fewer daily attacks compared to 55% in the placebo group. Attack frequency was also reduced to one third of that in the placebo group in the first 15 days. Attack freedom was seen in 85% of 16 episodic and 7 chronic participants one week after a single dose of betamethasone, compared with none receiving placebo. GON blocks have shown higher efficacy in episodic than in chronic participants in a prospective open-label study. Most clinics use 2.5 mL betamethasone (rapid- and long-acting) plus 0.5 mL lidocaine 2% s.c. ipsilateral to the pain. Side effects of short- and long-term use are comparable to those of oral corticosteroids. With this in mind, injections at 3-month intervals are considered safe in the clinics represented in this paper. Repeated nerve blocks in medically refractory people with CCH led to transient attack freedom in only one third. GON blocks are generally accepted for use in pregnant and breastfeeding women.
Oral triptans with longer half-lives Although there is no evidence from clinical trials, it is the authors' clinical experience that frovatriptan and naratriptan may be used for transitional prophylaxis in cases where GON blocks are inefficient or contraindicated, or as a short-term mini-preventive in people with several nightly attacks and short bouts. From the perspective of a cardiologist People with CH have a high burden of cardio- and cerebrovascular (CVD) risk factors, including high body mass index (for males) and smoking, which is reported by 48–68% of patients in recent publications. These factors are known to increase the risk of CVD. Cross-sectional studies have shown that overall multimorbidity, including CVD, occurs more frequently in people with CH than in matched controls. Therefore, use of triptans, verapamil and lithium may be worrisome. Triptans have an extracranial vasoconstrictive effect and are relatively contraindicated in people with known CVD. Although prescribers seem to take this into account, a recent Italian study showed that 4% of male patients were treated with triptans despite having a CVD. Retrospective data on CH patients using more than two daily doses have not reported serious adverse events; however, official guidelines still limit daily use to two. With a daily attack frequency of up to eight, triptans can seldom stand alone. As discussed above, verapamil is the first-line preventive medication. The highest recommended daily dose in cardiology is 480 mg, and combination with beta-blockers is not recommended due to the risk of atrioventricular block. An ECG should be assessed before initiation and each time before increasing the dose above 400 mg, 600 mg, 800 mg and 1000 mg. In patients treated with doses higher than 480 mg, an annual ECG is recommended, and in case of sinus bradycardia, 1st-degree AV block or symptoms such as syncope, fatigue or dizziness, Holter monitoring should be performed.
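The ECG monitoring rules described above can be sketched as a simple check: an ECG before initiation and before crossing each of the 400/600/800/1000 mg thresholds, plus an annual ECG on maintenance doses above 480 mg. The threshold values come from the text; the function names are illustrative only, and this is not a validated clinical protocol.

```python
ECG_THRESHOLDS_MG = (400, 600, 800, 1000)  # re-check before exceeding each
ANNUAL_ECG_ABOVE_MG = 480                  # annual ECG on higher maintenance doses

def ecg_required_before_change(current_dose_mg, new_dose_mg):
    """True if treatment is being initiated or the increase crosses a threshold."""
    if current_dose_mg == 0 and new_dose_mg > 0:
        return True  # ECG before initiation
    return any(current_dose_mg <= t < new_dose_mg for t in ECG_THRESHOLDS_MG)

def annual_ecg_required(maintenance_dose_mg):
    return maintenance_dose_mg > ANNUAL_ECG_ABOVE_MG

print(ecg_required_before_change(0, 240))    # initiation -> True
print(ecg_required_before_change(360, 480))  # crosses 400 mg -> True
print(ecg_required_before_change(480, 560))  # no threshold crossed -> False
print(annual_ecg_required(720))              # above 480 mg -> True
```

Encoding the rules this way makes the audit finding below easy to understand: each dose escalation within a new bout is a separate trigger that is easily forgotten.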
From clinical experience, patients often forget ECG controls when initiating treatment as a new bout begins or when increasing the dosage. This concerning issue is also reflected in audit data on 217 English CH participants on verapamil, which showed that 41% received verapamil treatment without an ECG; among those with an ECG, 19% had arrhythmias, with prolonged PR interval being the most frequent. Lithium is known to induce benign ECG alterations and near-fatal arrhythmias but may also have cardioprotective potential. At therapeutic lithium levels, T-wave depressions and sinus node dysfunction are the most common ECG findings. Arrhythmias are mainly noticed with high serum lithium. A baseline ECG is recommended, and in case of elevated serum lithium levels or symptoms of arrhythmia, a new ECG or Holter monitoring is needed. Future perspectives An era of new, specifically targeted treatments with few side effects is emerging in the headache field. Anti-CGRP therapy CGRP antibodies are the first targeted medical treatment option in CH based on a pathophysiological understanding of the disease. CGRP plasma levels increase during spontaneous and nitroglycerin-induced attacks and were reduced to baseline levels after spontaneous, sumatriptan- and oxygen-induced termination. Further, a double-blind RCT found that CGRP infusion triggered attacks in people with chronic and active episodic CH, but not in those in remission. In a phase III RCT, a 50% attack reduction was seen in 71% of episodic participants treated with galcanezumab vs. 53% treated with placebo, and mean weekly attack frequency across weeks 1 through 3 was significantly reduced by 40%. Of most importance, CGRP antibodies are generally very well tolerated with few side effects. On this basis, galcanezumab was approved for the treatment of ECH in the US and Canada but not in Europe, as the European Medicines Agency found the effect and evidence to be too sparse.
A more recent Korean open-label study of 240 mg galcanezumab in ECH supports the findings of the RCT. Galcanezumab did not meet the primary or secondary efficacy endpoints in CCH. Studies on fremanezumab in ECH and CCH were aborted, as futility analyses concluded that the primary endpoints were unlikely to be met, and the recent study in ECH with eptinezumab has stopped further inclusion after futility analyses. There are ongoing open-label trials with eptinezumab for CCH and erenumab for CCH. Recently, recommendations on optimal RCT design in ECH and CCH have emerged. Neuromodulation and invasive procedures Neuromodulation has become an emerging and viable treatment option for medically treatment-refractory CCH patients, e.g. after treatment failure of three preventive drugs. Despite being restricted to a minority, invasive and very costly, neurostimulation greatly reduces patient burden and subsequently both indirect and direct healthcare costs. In extremely severe cases, deep brain stimulation has been described in case series, but proper trials into efficacy, safety and the optimal stimulation target are lacking. After several case series, the ICON (intractable chronic cluster headache) trial provided evidence for the efficacy of occipital nerve stimulation (ONS) in an international, multicenter phase 3 RCT. In the 131 chronic participants, mean attack frequency was reduced from 15.8 weekly attacks to 7.4 during the one-year study period for both high and low electrical dose. ONS is now reimbursed for medically intractable CCH in several European countries. Two RCTs have investigated sphenopalatine ganglion (SPG) stimulation versus sham stimulation as acute treatment in CCH, finding a 10% difference in achieving pain freedom within 15 min versus sham.
Long-term open-label studies found that 33% experienced a preventive effect, that 78% of attacks were successfully treated with SPG stimulation, and that 74% of participants with CCH could reduce or remain off all preventive medication when using SPG stimulation; however, the treatment is currently unavailable. Navigation-guided botulinum toxin injections targeting the SPG are currently being investigated in a multinational RCT, as pilot data have indicated safety and efficacy in CCH. There are three randomized trials assessing vagus nerve stimulation (NVS) as both acute and preventive treatment. As acute treatment, there was no difference in pain freedom after 15 min between NVS and sham. For prevention, the open-label study in CCH found a significant reduction of 4 weekly attacks in the NVS group versus sham. The 50% responder rate was 40% in the NVS group versus 8% in the sham group. NVS seems a viable but fairly costly option in those patients who are unresponsive to or have a contraindication against triptans. "I'd rather give birth than endure a cluster headache attack" CH has an impact on all aspects of people's lives, including higher proportions of multimorbidity of somatic and psychiatric diseases. In a recent interview-based Danish study on personal and economic burden, 92% of people with episodic CH (ECH) in bout, 98% of people with chronic CH (CCH) and even 15% of people with ECH in remission reported being restricted in their everyday lives. People with CH do not present with any visible physical handicap, which hinders understanding from family, friends and colleagues. Overall, the disease mainly affects the younger half of the population, where careers and family lives are being established, and in 21% and 48% of people with ECH and CCH, respectively, CH led to dependency on family and friends. In clinical experience, family members report feeling helpless and afraid because people with CH may become irritable or aggressive as part of their attacks or might even become self-harming.
The mean diagnostic delay, although decreasing, is 6 years, during which patients are therapeutically mismanaged. Misdiagnosis is seen in 49% of people with CH, most often as migraine, tension-type headache and sinusitis, and removal of a healthy tooth has been reported in 15–43%. Females are misdiagnosed more frequently than males, with suggested reasons being pre-assumptions of women having migraine, a lower male:female ratio in chronic patients and differences in clinical presentation. Self-rated health is strongly associated with mortality, making it an important instrument when investigating the burden of a disease. Self-rated health is significantly reduced in ECH, and in CCH the odds of rating one's health as 'good' or 'very good' are tenfold lower compared to matched controls. Co-existing depression and anxiety also occur more frequently in people with CH than in controls. Suicidal thoughts are reported by 47–55% and attempts by 1.3–2% of people with CH. "Although I'm not in bout, I still fear attacks every day" Growing evidence indicates that people with ECH are both physiologically and psychologically impacted even in remission periods, implying that ECH is also a chronically disabling disease despite its cyclic nature. Self-rated quality of life is significantly lower in people with ECH during remission compared with healthy controls, perhaps partly explained by worrying and avoidance behavior towards potential triggers. In addition, observational studies suggest that parameters such as sleep and other hypothalamic functions are altered not only during the active disease phase but also up to one year into remission, i.e., the attacks might be regarded as "the tip of the iceberg" in relation to pathophysiological activity. An extensive epidemiological study from Sweden has shown that twice as many people with CH as age-, sex- and demographically matched controls were on disability pension (10.3% vs.
5.8%), which is in agreement with previous studies. The number of sick days is also significantly higher for people with CH than for controls. The economic burden of CH is significant and calls for attention. The direct costs of medication and healthcare services in a Danish study sum to 5,178 euros per patient per year (CCH 9,158 euros), mainly due to acute medication and hospital admittance. A German study from 2011 found direct costs per patient to be 4,737 euros for half a year; including indirect costs, the yearly costs amount to 11,739–11,926 euros per patient. Although not directly comparable (it reports costs per bout), an Italian study found that the total cost of a CH bout was €4,398 per patient and that the total cost of CCH was 5.4 times higher than that of ECH (€13,350). Treatment is initiated using a trial-and-error approach, and the close follow-up required is challenging in most health-care settings due to organizational and resource limitations. It is a significant limitation in CH treatment that existing guidelines are based on very few and small studies not fulfilling modern standards. They agree overall on first- and second-choice treatments but vary on recommended dosages and electrocardiogram (ECG) monitoring intervals. Beyond the first- and second-line options, the evidence is even more sparse. The treatment of CH can be divided into three categories: 1) acute treatment, aiming to abort single attacks; 2) preventive treatment, taken at regular intervals to lower attack frequency and pain intensity; and 3) transitional treatment, which can be used as a short-lasting preventive if bouts are short or, more often, to obtain a "bridging" effect in the period during which a preventive is titrated to its therapeutic threshold (Fig. ). The goal must always be to suppress attacks with preventives, minimizing the need for acute treatment.
Treatment with 100% oxygen and triptans are the cornerstones of acute CH treatment, and it is recommended to prescribe both. Simple analgesics and opioids are not effective. In addition, inappropriate use of opioids increases the risk of substance abuse. Oxygen One large randomized, double-blind, placebo-controlled crossover study with 109 participants showed that 78% inhaling 100% oxygen were pain free or reported an adequate effect after 15 min, compared to 20% receiving air. These findings were confirmed in a small double-blind cross-over study with 19 participants and in an open-label study in 33 episodic and 19 chronic participants. An international survey covering 56 countries (23% of responders were from Europe) found that more than half of the participants reported use of oxygen in CH to be "very effective" or to provide "complete" remission. Data from the Danish CH Survey similarly found that 75% had a 50% response to oxygen. Oxygen is generally safe and without side effects; however, it is impractical to carry around and to use multiple times a day away from home. Furthermore, availability differs between countries. Oxygen is fully reimbursed (or with minor restrictions) in only 12 countries, accounting for 63% of the European population. Triptans Triptans are easy to carry along, but they are costly and official guidelines limit their use to twice per day. However, based on an individual assessment and lack of options, many people with CH exceed this limit in agreement with their neurologist. Triptans are contraindicated in people with certain cardiovascular diseases, as the vasoconstrictive effect has been theorized to increase the risk of stroke and acute myocardial infarction. Again, people may be so burdened that usage may still be offered after thorough information. The administration route affects efficacy.
Subcutaneous injectable sumatriptan has been shown to induce complete pain freedom within 20 min in 75% of participants; sumatriptan nasal spray induced pain freedom in 47% versus 18% for placebo at 30 min; in episodic participants, oral zolmitriptan 10 mg induced meaningful pain reduction in 47% versus 29% for placebo; and the effect of nasal zolmitriptan 5 and 10 mg within 30 min was 40% and 62%. The oral formulation is generally not recommended due to its slower onset of effect but may be the only available treatment in many countries. Triptans are reimbursed completely or with minor restrictions in 16 European countries, representing 66% of the population. However, it is the authors' experience that people on social support still find it difficult to pay for injectable/nasal sumatriptan. Overall, oxygen and triptans are effective; however, the major problem lies with the high number of daily CH attacks, which necessitates a too-high (off-label) daily intake of triptans, and with non-responders, patients with limited access to the medication (not available or too expensive), and patients with cardiovascular comorbidities. These people may end up with problematic use of opioids or illegal drugs. In pregnancy and during breastfeeding, treatment with oxygen is considered safe; recommendations on the use of sumatriptan vary from limited use to no use.
Preventive treatment is the cornerstone of CH management in order to suppress or limit the extreme pain attacks. Even for people with effective acute treatment, the effect is not instant. Therefore, it is recommended, although not evidence based, that people with ECH start preventive treatment as soon as attacks emerge and slowly taper off after two weeks without attacks (allowing for a swift increase again if attacks re-emerge). In CCH there is a need for continuous prevention. The existing treatment recommendations are based on small studies with a low level of evidence (listed in Table ). This would not necessarily be a problem if clinical experience was that the treatments were well tolerated and effective; however, this is not the case. We will review the existing literature on the three major preventive treatment options in CH: verapamil, lithium and topiramate. Verapamil The understanding of its mechanism of action in CH remains unclear. Suggested mechanisms include vasospasm inhibition, GABA-A inhibition, circadian rhythm modulation, and a hyperpolarization-activated cyclic nucleotide-gated channel-mediated decrease of parasympathetic activity. The rationale for using verapamil as a first-line preventive treatment is based on two randomized controlled trials (RCTs) and three open-label studies.
The anticonvulsant drug is recommended in doses of 50–200 mg daily by the 2019 EHF guidelines . The preventive effect of topiramate has been investigated in two small open label studies with 13 and 33 participants with conflicting results. The largest study found more than 50% reduction in only 21% of participants (study period of 20 days) and the smallest showed that 75%, mostly episodic participants, went into remission . The benefit of topiramate is that cardiac monitoring is not required. However, depression is a known side-effect; especially in people with pre-existing depressive symptoms, which is reported in up to 67% of people with CH . Other prominent side-effects, often leading to discontinuation in the clinic, are cognitive impairment and paresthesia. Topiramate cannot be used in people with kidney stones. In treatment refractory patients, it may be necessary to try medical treatment with even lower level of evidence either as monotherapy or as add-on. Treatment with 10 mg oral melatonin was shown effective in a small RCT in 5 out of 10 people with ECH compared to none in the placebo group . In a small case–control study, mainly with chronic patients, no effect was observed . OnabotulinumtoxinA may have some additional effect in treatment refractory CCH , however, evidence is sparce and the pathophysiological mechanism behind an effect using a migraine protocol is uncertain. Although not available in most European countries, it is worth to mention that retrospective chart review has suggested a possible effect of short-term (3–5 days in hospital) intravenous treatment of dihydroergotamine and 1–2 mg ergotamine without caffeine given at night may also prevent nightly attacks and nausea prevented in advance . Use of illicit drugs like psilocybin, lysergic acid diethylamide (LSD) and gamma-hydroxybutyrat (GHB) are more frequently reported by people with CH compared to the general population . 
Several retrospective surveys and case reports indicate that psilocybin and LSD/the non-halluzinogenic bromo-LSD in some cases may abort attacks and extend the duration of remission periods . An explorative RCT with 17 episodic and chronic participants investigated microdosing of psilocybin finding no difference between groups in efficacy and side effects . As with other medical treatment, RCTs are needed to evaluate efficacy and safety before treatment can be recommended. Current preventive medications need to be titrated up to an effective dosage, and an intermediate treatment consisting of corticosteroids can therefore be applied if patients are burdened by many attacks . The European 2019 EHF guidelines define treatment with either prednisone taken orally or given as a greater occipital nerve block as intermediate treatment, whereas the 2023 European Academy guidelines and the 2006 EFNS guidelines include them under preventive treatment . Prednisone The exact effect mechanism is poorly understood, but oral corticosteroids have been suggested to attenuate trigeminal activation and counteract hypothalamic dysfunction . A multi-centre, double-blind, RCT from 2021 showed a fast onset of 100 mg prednisone in 118 episodic participants with 7.1 attacks compared with 9.5 in the placebo group within the first week . Two studies from 1978 and 1975; the first a case-series in 19 participants showed that 58% became attack free with 10–80 mg prednisone daily for 3–10 days and the second, a double-blind single cross-over study, also indicated efficacy . Short-term use is considered effective and safe (although people on rare occasions may develop psychiatric symptoms), but continuous use may increase the risk of known systemic side effects of prednisone (opportunistic infections, hypertension, osteoporosis and metabolic diseases such as type 2 diabetes). 
Greater occipital nerve block
The effect is thought to occur through a modulatory effect on nociceptive processing in trigeminal neurons via the trigemino-vascular system . Two double-blind RCTs exist. The first investigated three injections of cortivazol within 1 week in 28 episodic and 15 chronic participants. Two to four days after the third injection, 95% in the active group had two or fewer daily attacks compared to 55% in the placebo group. Attack frequency was also reduced to one third of that in the placebo group in the first 15 days . Attack freedom was seen in 85% of 16 episodic and 7 chronic participants one week after a single dosage of betamethasone, compared with none receiving placebo . GON blocks have shown higher efficacy in episodic than in chronic participants in a prospective open-label study . Most clinics use 2.5 mL betamethasone (rapid and long acting) plus 0.5 mL lidocaine 2% subcutaneously (sc), ipsilateral to the pain. Side effects of short-term and long-term use are equal to those of oral use.
With this in mind, in the clinics represented in this paper, injections at 3-month intervals are considered safe. Repeated nerve blocks in medically refractory people with CCH led to transient attack freedom in only one third . GON blocks are generally accepted for use in pregnant and breastfeeding women . Although there is no evidence from clinical trials, it is the authors' clinical experience that frovatriptan and naratriptan may be used for transitional prophylaxis in cases where GON blocks are ineffective or contraindicated, or as a short-term mini-preventive in people with several nightly attacks and short bouts . People with CH have a high burden of cardio- and cerebrovascular (CVD) risk factors, including high body mass index (for males) and smoking, which is reported by 48–68% of patients in recent publications . These factors are known to increase the risk of CVD. Cross-sectional studies have shown that overall multimorbidity, including CVD, occurs more frequently in people with CH than in matched controls . Therefore, use of triptans, verapamil and lithium may be worrisome. Triptans have an extracranial vasoconstrictive effect and are relatively contraindicated in people with known CVD. Although prescribers seem to take this into account, a recent Italian study showed that 4% of male patients were treated with triptans despite having a CVD . Retrospective data on CH patients using more than two daily dosages have not reported serious adverse events; however, official guidelines still limit daily use to two. With a daily attack frequency of up to eight, triptans can seldom stand alone. As discussed above, verapamil is the first-line preventive medication. The highest recommended daily dose in cardiology is 480 mg, and combination with beta-blockers is not recommended due to the risk of atrioventricular block. An ECG should be assessed before initiation and each time before increasing the dose above 400 mg, 600 mg, 800 mg and 1000 mg.
In patients treated with dosages higher than 480 mg, an annual ECG is recommended, and in case of sinus bradycardia, first-degree AV block or symptoms such as syncope, fatigue or dizziness, Holter monitoring should be performed. From clinical experience, patients often forget ECG controls when initiating treatment as a new bout begins or when increasing the dosage. This concerning issue was also found in audit data on 217 English CH participants on verapamil: 41% received verapamil treatment without an ECG, and among those with one, 19% had arrhythmias, with prolonged PR interval being the most frequent . Lithium is known to induce benign ECG alterations and near-fatal arrhythmias but may also have cardioprotective potential. At therapeutic lithium levels, T-wave depressions and sinus node dysfunction are the most common ECG findings. Arrhythmias are mainly noticed with high serum lithium. A baseline ECG is recommended, and in case of elevated serum lithium levels or symptoms of arrhythmias, a new ECG or Holter monitoring is needed. An era of new, specifically targeted treatments with few side effects is emerging in the headache field.
Anti-CGRP therapy
CGRP antibodies are the first targeted medical treatment option in CH based on a pathophysiological understanding of the disease . CGRP plasma levels increase during spontaneous and nitroglycerin-induced attacks and were reduced to baseline levels after spontaneous, sumatriptan- and oxygen-induced termination . Further, a double-blind RCT found that CGRP infusion triggered attacks in people with chronic and active episodic CH, but not in those in remission . In a phase III RCT, a 50% attack reduction was seen in 71% of episodic participants treated with galcanezumab vs. 53% treated with placebo, and mean weekly attack frequency across weeks 1 through 3 was significantly reduced by 40% . Most importantly, CGRP antibodies are generally very well tolerated, with few side effects.
On this basis, galcanezumab was approved for the treatment of ECH in the US and Canada but not in Europe, as the European Medicines Agency found the effect and evidence to be too sparse . A more recent Korean open label study of 240 mg galcanezumab in ECH supports the findings of the RCT . Galcanezumab did not meet the primary or secondary efficacy endpoints in CCH . Studies on fremanezumab in ECH and CCH were aborted, as futility analyses concluded that the primary endpoints were unlikely to be met, and the recent study in ECH with eptinezumab has stopped further inclusion after futility analyses. There are ongoing open label trials with eptinezumab for CCH and erenumab for CCH . Recently, recommendations on optimal RCT design in ECH and CCH have emerged .
Neuromodulation and invasive procedures
Neuromodulation has become an emerging and viable treatment option for medically refractory CCH patients, e.g. after treatment failure of three preventive drugs . Despite being restricted to a minority, invasive and very costly, neurostimulation greatly reduces patient burden and subsequently both indirect and direct healthcare costs . In extremely severe cases, deep brain stimulation has been described in case series, but proper trials of efficacy, safety and the optimal stimulation target are lacking . After several case series, the ICON (intractable chronic cluster headache) trial provided evidence for the efficacy of occipital nerve stimulation (ONS) in an international, multicentre phase 3 RCT. In the 131 chronic participants, mean attack frequency was reduced from 15.8 weekly attacks to 7.4 during the one-year study period, for both high and low electrical dose. ONS is now reimbursed for medically intractable CCH in several European countries. There are two RCTs investigating sphenopalatine ganglion (SPG) stimulation versus sham stimulation in CCH as acute treatment, finding a 10% difference in achieving pain freedom within 15 min versus sham .
Long-term open label studies found that 33% experienced a preventive effect, that 78% of attacks were successfully treated with SPG stimulation, and that 74% of participants with CCH could reduce or remain off all preventive medication when using SPG stimulation; however, the treatment is currently unavailable . Navigation-guided botulinum toxin injections targeting the SPG are currently being investigated in a multinational RCT, as pilot data have indicated safety and efficacy in CCH . There are three randomized trials assessing vagus nerve stimulation (NVS) as both acute and preventive treatment . As acute treatment, there was no difference in pain freedom after 15 min between NVS and sham. For prevention, the open-label study in CCH found a significant reduction of 4 weekly attacks in the NVS group versus sham. The 50% responder rate was 40% in the NVS group versus 8% in the sham group . NVS seems a viable but fairly costly option in those patients who are unresponsive to or have a contraindication against triptans. As long as people with CH have to endure and fear CH attacks, the impact on their lives and the associated societal burden remain enormous. Effective preventive medication that can be taken as soon as attacks emerge, with a rapid onset of effect and few side effects, must be the ultimate goal when treating CH. New preventive treatments, investigated according to modern standards and of high quality, are needed. They may be more expensive, but at present the major costs are due to acute medication and hospital admissions. With an effective preventive treatment, these costs are expected to be greatly reduced, adding to overall cost-effectiveness. Other challenges are accessibility of existing acute and preventive treatments, increasing knowledge of CH in the general population and among general practitioners to obtain better social support, and, not least, securing a smoother diagnostic process. |
Ophthalmology emergency department visits in a Brazilian tertiary
hospital over the last 11 years: data analysis | 91d8026c-361d-4075-ac97-109e9c821b21 | 11826528 | Ophthalmology[mh] | Emergency departments (EDs) are an essential part of patient care, with the unique capability to provide 24-hour full-range immediate medical services . Conditions that require urgent ocular care, such as ocular trauma, infections, retinal detachment, and uveitis, are associated with a high risk of visual impairment if they do not receive appropriate treatment . Despite representing a small body surface, the eyes are the third most frequent organ, after hands and feet, affected by injuries . Besides, vision is an essential overall health quality aspect, and vision loss is a significant risk factor for functional decline . However, crowding ophthalmology EDs is a real situation in most countries , leading to delayed and low-quality care for real urgent cases. Nonurgent visits, such as those for glass prescription, dry eye syndrome, blepharitis, and chalazion, have been reported between 8% and 62% of total patients’ visits , especially at self-referral services. The high number of nonurgent visits to EDs is an issue described in previous studies, and it is probably a significant aspect of crowding in waiting rooms and delay in medical care. In Brazil, we have another important factor, as the majority of the population depends only on our public health system (SUS-Sistema Único de Saúde) to access health care, which is universally accessible and free. SUS is divided into three care complexity levels (primary, secondary, and tertiary care) that should work as an integrated network to organize patient access from primary care to the other levels . Many patients seek ED attendance for nonurgent complaints, probably because of lacking information and facing difficulties in accessing ophthalmological assistance in primary care . 
For years, challenges in access to care have meant that originally nonurgent cases could arrive at the emergency room in an advanced disease phase, with a poor prognosis and demanding urgent intervention, as in cases of glaucoma and diabetic retinopathy. There are few studies on the profile of ophthalmology ED visits in Brazilian hospitals, especially assessing trends from the last 5 years, and only a few studies worldwide have analyzed large volumes of data from ophthalmology visits. The Universidade Federal de São Paulo (UNIFESP) ophthalmology ED is linked to Hospital São Paulo, a tertiary-care, 24-hour, public open-access hospital located in São Paulo, which belongs to UNIFESP. Despite the high volume of visits, there is no ophthalmological triage system in the hospital. In addition to healthcare purposes, it offers education for residents in training. This study aimed to evaluate the profile of visits to the UNIFESP ED over the last 11 years, evaluating the causes of the change in inflow and possible proposals to improve the service flow. A cross-sectional retrospective study was conducted based on data analysis from all patients admitted to the ophthalmology ED of Hospital São Paulo from January 2009 to December 2019. This study was approved by the Institutional Ethics Committee of UNIFESP and followed Helsinki principles. Hospital São Paulo is a state-funded, free, 24/7 emergency hospital in São Paulo, Brazil, with an assistance area in the city's South Zone covering a population of 5 million. The permanent staff comprises ophthalmology residents (four during the day and two at night, on weekdays and weekends) and two ophthalmologists. The data were collected from the electronic medical records available in the hospital database.
The following data from ED medical charts were retrieved by the hospital information technology specialists: patient internal registration code, date and hour, age at the visit, sex, informed zip code, and ICD-10 (International Classification of Diseases, 10th revision) code, as informed by the physician. ICD-10 chapter 7 ("Diseases of the eye and adnexa", codes H00-H59) and chapter 19 ("Injury, poisoning and certain other consequences of external causes", codes S00-T88) were used. Retrieved data were compiled in an anonymized spreadsheet for subsequent statistical analysis. Initially, all patients were considered for statistical analysis. In post-analysis, we excluded patients without identifiable diagnoses or complete medical care records. Among the data, epidemiologic parameters, medical diagnosis, number of visits according to time of day, day of the week, month and year, and number of visits according to medical staff were analyzed. During the 11 years of the study, there were 634,726 visits to the Hospital São Paulo ophthalmology ED, with a mean of 57,702 ± 7,390.5 per year (± standard deviation), going from 50,729 in 2009 to 70,623 visits in 2019, representing an increase in inflow of 19,894 visits (39.2%). The analysis of the number of visits per day showed a mean of 158.1 ± 34.3 visits per day. The month with the highest inflow was March 2011, with 410.9 visits per day, and the lowest inflow was in April 2017, with 65.5 visits per day. When evaluating the inflow and excluding 2011 (181.4 ± 83.8 visits per day) and 2017 (125.2 ± 20.5) from the analysis, as they were atypical compared with the years before and after each one, the highest inflow occurred during the fourth quarter, with 165.7 visits per day, and the lowest during the second quarter, with 154.1 visits per day. The percentage of single-visit patients over the years remained between 64.7% in 2009 (lowest rate) and 72.2% in 2019 (highest percentage).
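The aggregate figures above can be cross-checked directly from the reported totals; a minimal sketch (the input values are those reported in the text, the rounding is ours):

```python
# Reported totals for the UNIFESP ophthalmology ED, 2009-2019.
visits_2009 = 50_729
visits_2019 = 70_623
total_visits = 634_726  # whole 11-year period

# Relative growth in annual inflow between 2009 and 2019.
growth_pct = (visits_2019 - visits_2009) / visits_2009 * 100
print(f"growth: {growth_pct:.1f}%")            # growth: 39.2%

# Mean annual inflow over the 11 years.
print(f"mean/year: {total_visits / 11:,.0f}")  # mean/year: 57,702

# Mean daily inflow in 2019.
print(f"2019/day: {visits_2019 / 365:.1f}")    # 2019/day: 193.5
```

The same arithmetic reproduces the 193.5 visits-per-day figure for 2019 quoted in the discussion.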
When evaluating the 11-year period, single-visit patients accounted for 53.1% of the total visits, meaning that 336,704 patients visited the ED only once during this period. The median patient age was 38 ± 20.4 years (range 0-101); patients under 5 years represented 4.8%, and patients over 65 represented 12.4%. The age profile did not change significantly over the years, with the lowest mean of 39.3 ± 20.5 years in 2009 and the highest mean of 41.2 ± 20.0 in 2019. Male patients represented 54.3% of total visits . The visits showed substantial variation when comparing regular weekdays and weekends, a pattern that was consistent over the years. Regular weekday visits represented 80.8% ± 1.4% over the entire study period, with the lowest percentage of 79.7% in 2019 and the highest of 82.6% in 2011. Visits on Mondays corresponded to 18.2% ± 0.6%, and those on Sundays to only 7.8% ± 0.5%. Inflow rates tended to decrease progressively from Monday to Friday . The analysis per period of the day showed that 79.4% of visits occurred from 7 am to 5 pm. The inflow increased significantly between 8 pm and midnight, from 5.8% in 2009 (mean of 8.2 visits) to 9% in 2019 (mean of 17.4), and also between midnight and 5 am, from 1.1% (mean of 1.5 visits) in 2009 to 2.6% (mean of 4.9 visits) in 2019 . The most commonly physician-reported ICD-10 diagnoses were acute conjunctivitis, blepharitis, keratitis, corneal foreign body, subconjunctival hemorrhage, and ocular trauma. ICD-10 data between 2009 and 2014 were not considered for analysis due to a large amount of incomplete data.
The analysis between 2015 and 2019, excluding files with missing data (21%), showed that acute conjunctivitis represented 34% (H10), blepharitis 6.9% (H01.0), keratitis 7% (H16.1; H16.3; H16.8), hordeolum/chalazion 6.4% (H00), corneal foreign body 6.2% (T15.0), corneal ulcer 3.5% (H16.0), ocular trauma 3.2% (S05), and subconjunctival hemorrhage 2.8% (H11.3) . Emergency consultation is essential to properly manage urgent health problems, such as ocular trauma, uveitis, infections, and retinal detachment . Moreover, it provides quick access to ophthalmological evaluation, particularly in the Brazilian health system. Since adequate ophthalmological coverage and population guidance are lacking, crowded EDs are a real issue in Brazil. The number of public ophthalmology EDs in São Paulo has decreased, or their access has been limited, over recent years, which could also explain the increased crowding of our service. The increase of 19,894 visits (+39.2%) from 2009 (50,729) to 2019 (70,623) is a significant change in inflow, representing more visits than many previous studies reported per year . This can be explained by improved hospital accessibility and public transport around our ED, such as the construction of a subway station close to the hospital in 2018. Other explanations include the closure of some public open-access ophthalmology EDs in São Paulo and the reduced access of the Brazilian population to private health systems . Our assistance area covers a population of 5 million in the South Zone of São Paulo, but our daily experience shows that many patients come from other zones or even from other federative units in Brazil; a future analysis based on patient origin is required to better understand this phenomenon.
São Paulo experienced an acute conjunctivitis epidemic, which corresponds to the abruptly increased inflow during the first half of 2011 . The UNIFESP ED was not completely open for visits during the second quarter of 2017, which explains the decreased inflow. Data from previous studies on Brazilian ophthalmology EDs showed 1,224 visits in 3 months in 2000 (13.6 per day) at a service in Sergipe, Brazil , 581 visits per week (83 per day) during 2006 at a tertiary hospital in São Paulo, Brazil , 8,346 visits in 5 months of 2005 (55.6 per day) at a tertiary hospital in Belo Horizonte, Brazil , and 8,689 visits during 2009 (32.8 per day) at a tertiary hospital in Goiânia, Brazil . We could not find recently published studies that evaluated Brazilian ophthalmology EDs over the last 5 years or that evaluated such a high number of visits (70,623 visits in 2019, with a mean of 193.5 visits per day). The comparison with previous Brazilian studies shows different situations across Brazilian cities and a lack of recent studies for comparison. We believe that the increase in volume and the high number of nonurgent visits are an issue in most Brazilian public ophthalmological services. The increase of 39.2% found in our analysis is higher than in previous studies from other countries. A previous study evaluated the change in eye-related visit trends at an ED in Beirut, Lebanon, from 1997 to 2012, finding a smaller increase in inflow, from 39,158 to 46,363 (+18%), over the 15-year period . Another similar study found an increase in inflow of 11% from 2001 to 2014, based on analysis of more than 11 million visits in that period in the United States . Comparing profiles by seasonal distribution, 27,120 visits were evaluated in 2013 at a Turkish ophthalmology ED . The most common diagnoses comprised acute conjunctivitis, blepharitis, keratitis, corneal foreign body, and hordeolum/chalazion, a profile similar to previous reports .
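The per-day rates quoted in the comparisons above follow directly from each study's reported totals and observation windows; a minimal sketch (day counts are approximations of the stated periods):

```python
# (total visits, approximate days of observation) as reported per study.
studies = {
    "Sergipe, 2000 (3 months)":        (1_224, 90),
    "Sao Paulo, 2006 (per week)":      (581, 7),
    "Belo Horizonte, 2005 (5 months)": (8_346, 150),
    "UNIFESP, 2019 (1 year)":          (70_623, 365),
}
for name, (visits, days) in studies.items():
    # Normalize to mean visits per day so volumes are comparable.
    print(f"{name}: {visits / days:.1f} visits/day")
```

This normalization is what makes services with very different reporting windows comparable on a single scale.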
In previous reports, corneal foreign bodies appear as the most common diagnosis. Another study found conjunctivitis, followed by corneal foreign body, as the most common diagnoses at a tertiary hospital in São Paulo in 2006. Nonurgent diseases, such as hordeolum and blepharitis, represent more than 23% of Hospital São Paulo ED cases. A deeper analysis of each file over a shorter period could help better differentiate urgent from nonurgent cases and characterize their sociodemographic profile and clinical evolution. Ophthalmic coverage and access in our area have been decreasing over the years, and patients have resorted to using the ED as a triage service. However, there is no appropriate counter-referral system that could absorb those patients after the ED visit. Urgent cases are mostly accepted by the UNIFESP Ambulatory Eye Clinics, when possible. The low proportion of single-visit patients, only 53.1% (336,704) over the entire study period, could be explained by re-presentations from the UNIFESP Ambulatory Eye Clinics and by a low follow-up success rate after using the ED as the initial healthcare service, necessitating more than one ED consultation. Given the lack of a well-established triaging system in ophthalmology, it becomes even harder to manage the high daily volume of patients in the ED. The Rome Eye Scoring System for Urgency and Emergency (RESCUE), proposed by Rossi in 2007, attempted to establish a tested and effective way to triage ophthalmology patients. It could be a way to apply a triaging system to our service, possibly adapted to our reality. The reduced access to 24-hour public ophthalmic emergency services in São Paulo, combined with the inappropriate use of emergency services for nonurgent conditions, results from the population's limited understanding and the difficulty of accessing ophthalmological services in primary care.
Besides creating new public ophthalmic services, collaboration among existing ones is required to distribute patient care better and avoid overloading a few of them. It is also vital to optimize the referral system, which could reduce the nonurgent visits overloading emergency services. A previous study at the Wilmer Eye Institute (Baltimore, USA) found that allowing same-day access to ambulatory ophthalmology clinics decreases costs to the healthcare system and volume in the ED. Overcrowded EDs result in decreased patient satisfaction and increased physician burnout, which is even more expensive for the healthcare system, an important aspect for a public system like SUS in Brazil. Stagg et al. agreed that facilitating clinic access is potentially the most effective way to keep nonurgent cases out of emergency care. Our study has several limitations. Our data lack information on examination findings. Also, ICD-10 codes were assigned by many different doctors at different training stages, potentially resulting in misdiagnosis. The long period and the large amount of data included for analysis increase biases and possible errors that could not be checked. Despite the risk of bias, large data analyses such as ours can give a better understanding of the global change in our ED profile over the years, allowing appropriate adjustments to improve the quality of the service provided. In conclusion, overcrowded EDs are a real issue in Brazil, with a 39.2% increase in visits from 2009 to 2019 at the UNIFESP ED. Our hypothesis is that the reduction in emergency services in the city of São Paulo and the inappropriate use of emergency services led to this problem. Solutions would comprise a triage system for urgent cases, remodeling the healthcare system to facilitate access to ambulatory clinics, and educational programs.
Surface morphology of the oral cavity of redbelly tilapia, Coptodon zillii (Gervais, 1848)

The morphological characteristics of a fish's mouth cavity reveal its structural adaptability to different food types, while the oropharyngeal cavity demonstrates plasticity and structural adaptability, indicating the ability to consume various food particles. Fish adaptations enable them to efficiently capture and process prey, contributing to their success as diverse feeders in aquatic ecosystems . The shape and size of the mouth cavity reveal the feeding behavior and ecological niche of a species. The oral cavity plays a crucial role in food intake and the detection of environmental changes, with oral glands agglutinating food and taste buds playing key functions [ – ]. The morphology of the fish feeding apparatus is influenced by various factors such as feeding strategy, environmental conditions, resource utilization, ecological community structure, and speciation processes [ – ]. Previous studies examined the mouth and pharyngeal cavities in various fish species, revealing that the mouth cavity is used to select desirable food particles or reject undesirable ones and is adapted for different food particles in different species [ , , , ]. The morphology of teleosts' oral cavities and the presence and structure of teeth vary significantly among species, making it challenging to apply universal descriptions . Understanding this diversity is crucial for understanding feeding behaviors and ecological roles. Future research should explore how these variations impact the fitness and survival of different fish species in their natural environments.
Tilapia, a highly productive and internationally traded food fish, is known for its rapid growth rate and adaptability to different ecological conditions, making it a preferred choice for aquaculture and contributing to the management of freshwater ecosystems by controlling aquatic weeds . Tilapia zillii has economic and ecological importance as a food fish, for aquaculture, weed control, the commercial aquarium trade, and recreational fishery . Tilapia zillii is a versatile fish that thrives under various water quality and environmental conditions, inhabiting lakes, rivers, wetlands, estuaries, and marine habitats. It can tolerate a wide range of salinity levels, making it a popular species for aquaculture worldwide due to its resilience and adaptability . It belongs to the order Perciformes, family Cichlidae, and genus Coptodon . It is widely distributed in the freshwaters and lakes of Africa, especially in Egypt. It is a herbivorous fish that feeds on algae, macrophytes, aquatic insects, and fish eggs . Morphological data on the oral cavity of Tilapia zillii are scarce, apart from what has been published about its gills . Therefore, this study focuses on describing the morphological features, with new insights into the teeth of the upper and lower jaws, the oral valves, the palate, and the tongue, through gross anatomical and scanning electron microscopic techniques. The study aims to characterize the oral morphology of Tilapia zillii, enhancing our comprehension of its feeding behavior and ecological niche, and potentially informing aquaculture practices and conservation efforts for this species. This research can also provide insights into the evolutionary adaptations of the oral structures of Tilapia zillii, revealing its ecological role in aquatic ecosystems. A better understanding of the species' feeding habits can, in turn, inform conservation strategies to protect this crucial fish species.
The present work was conducted on ten mature healthy Tilapia zillii (redbelly tilapia), collected in January from fishers' shops after being caught in Burullus Lake, Kafr El-Sheikh governorate, Egypt. Their weights ranged between 40 and 60 g, and their total length was 12–16 cm. The samples were transported in a plastic aquarium within 2 h to our anatomical lab for gross anatomical and scanning electron microscopic examinations. All fish were anesthetized using benzocaine (4 mg/L). Fish collection followed the guidelines established in the 'Sampling protocol for the pilot collection of catch, effort, and biological data in Egypt' . This study was carried out with ethical permission from the Faculty of Veterinary Medicine, Alexandria University, and approved by the Institutional Animal Care and Use Committee (ALEXU-IACUC) (Approval code: 262/2023).

For gross anatomical examination

Five samples were dissected; the roofs were separated from the floors with scissors and washed with physiological saline (0.9% sodium chloride solution), and the oral cavity was photographed with a digital Olympus Plus camera (Olympus, Tokyo, Japan).

For scanning electron microscopic examination

The samples were fixed in a solution of 2% formaldehyde, 1.25% glutaraldehyde, and 0.1 M sodium cacodylate buffer at pH 7.2 and 4 °C for 24 h. Following fixation, the samples were dehydrated through a graded series of ethanol, critical point dried, attached to aluminum stubs facing upwards, covered with carbon tabs, and sputter-coated with gold. The samples were examined and photographed using a JEOL 5300 ISM scanning electron microscope operating at 25 kV at the Faculty of Science, Alexandria University.
Gross morphological observations

The buccal cavity of Tilapia zillii had a small, narrow, terminal mouth opening. The buccal cavity consisted of upper and lower jaws bordered by the upper and lower lips ( Fig. a ). The roof of the buccal cavity consisted of the upper jaw, the upper semilunar valve, and the palate ( Fig. b ). The floor of the buccal cavity was composed of the lower jaw, the lower semilunar valve, and the tongue ( Fig. c ). The upper jaw carried premaxillary teeth and the lower jaw carried dentary teeth. The premaxillary and dentary teeth were present in two groups: a rostral group and a caudal group. The teeth of the rostral group were long and arranged in one row, while the teeth of the caudal group were short and arranged in several rows ( Fig. b /URT&UCT, Fig. c /LRT&LCT).

Scanning electron microscopic observations

The roof of the buccal cavity

The premaxillary teeth were observed on the upper jaw in two groups, an upper rostral group and an upper caudal group, with lengths that decreased laterally towards the mouth corners. Between the right and left halves of the upper jaw, there was an area devoid of both the rostral and caudal groups of premaxillary teeth ( Fig. a ). The upper rostral group of premaxillary teeth was long and arranged in one row, with two processes, medial and lateral, on their tips ( Fig. a /URT). The medial process of the upper rostral teeth was longer than the lateral one. The upper caudal group was present in several rows with three processes on their tips, and these teeth were not arranged in the same lines ( Fig. a /UCT). The upper caudal teeth at the corner of the upper jaw were present in only two rows ( Fig. b /UCT). The area between the rostral and caudal groups of premaxillary teeth and the areas between the upper caudal teeth had taste buds at the level of the surface epithelium ( Fig. d &f/TB).
The premaxillary teeth were bordered by the upper lip, which had taste buds on highly elevated epithelial protrusions ( Fig. c &e/TB). The upper valve was semilunar in outline, located caudal to the upper caudal premaxillary teeth ( Fig. a/UV), and had taste buds on highly elevated epithelial protrusions ( Fig. b &c/TB). The palate had several depressions, and its epithelium resembled fish scales ( Fig. d-f ).

The floor of the buccal cavity

The dentary teeth were observed on the lower jaw in two groups, a lower rostral group and a lower caudal group, with lengths that decreased laterally. Between the right and left lower rostral groups, there was a median area devoid of teeth, and between the lower caudal dentary teeth there was a median elevated area that fit into the area devoid of upper caudal premaxillary teeth ( Fig. a &b). The lower rostral group of dentary teeth was arranged in one row, while the lower caudal group was present in several rows, except at the corners of the lower jaw, where it was present in one row. The two groups of dentary teeth had the same appearance as the two groups of premaxillary teeth ( Fig. d &h). The dentary teeth were bordered by the lower lip, which had taste buds on highly elevated epithelial protrusions ( Fig. c, e &g/TB). The areas between the dentary teeth had taste buds on slightly elevated epithelial protrusions ( Fig. f /TB) and taste buds at the level of the surface epithelium ( Fig. i /TB). The lower valve was semilunar in outline, located caudal to the lower caudal dentary teeth, and had taste buds on highly elevated epithelial protrusions ( Fig. a &c/TB), taste buds on slightly elevated epithelial protrusions, and taste buds at the level of the surface epithelium ( Fig. b /TB). The tongue was a true tongue composed of a root, body, and apex. The lingual root was wider than the apex. The lingual apex was rounded, with a central depression ( Fig. d ).
The lingual body had a central elevation with two depressions on its sides ( Fig. d &e/Tb). The lingual surface had several ridges and carried taste buds on highly elevated epithelial protrusions ( Fig. f &g). The epithelium of the oral cavity of Tilapia zillii resembled fish scales ( Figs. c, g, i & j/blue arrowheads) and had pores for the mucous glands ( Fig. f and Fig. j /P).
The dentition, taste buds, mucous cell distribution, and microridge patterns on the oral epithelium along the oral roof and floor of fish have been recorded by Alsafy et al. , Alsafy et al. , Khillare et al. , Chanu et al. , Sayed et al. , Baaoom , Harabawy et al. , Abumandour et al. , and Abumandour and El-Bakary . The irregularity and distribution of taste buds and mucous cells, in addition to the microridge patterns of the epithelial surface of the oral roof and floor of Tilapia zillii, are considered adaptations to the fish's feeding behavior and preferences. Variations in the morphology of the teeth, taste buds, epithelial surface, and mucous cells of the oral cavity of different fish species are considered adaptations to different food preferences and feeding habits [ , – ]. The present work focused on the morphological features of the oral cavity of Tilapia zillii, with attention to the surface epithelium of the upper and lower valves and the palate and to the characteristics of the teeth in the lower and upper jaws, to provide basic information about the architecture of its oral cavity that may help in formulating diets for fisheries of this species. Khallaf and Alne-na-ei reported that Tilapia zillii is a herbivorous fish, and the current investigation shows that the premaxillary and dentary teeth of Tilapia zillii are arranged in two groups, rostral and caudal, in the upper and lower jaws. The rostral group is long and has two processes, while the caudal group is short with three processes. Thus, the different lengths of the premaxillary and dentary teeth, together with the processes on the tips of the rostral and caudal tooth groups, help to shred ingested particles such as algae. Additionally, Bonato et al. reported that fish may be herbivorous, carnivorous, or omnivorous depending on their feeding habits and dentition. Moreover, Debiais-Thibaud et al.
recorded the presence of different shapes and types of teeth depending on diets and feeding habits, as well as a gene regulatory network involved in tooth morphogenesis that drives the evolution of tooth shape. Taste buds are observed at different levels on the oral epithelium of Tilapia zillii: on highly elevated epithelial protrusions, on slightly elevated epithelial protrusions, and at the level of the epithelium. These findings align with those of Jakubowski and Whitear , who recognized these three types of taste buds based on their dimensions and the extent of their protrusion above the surface epithelium. Furthermore, Gamal et al. reported three types of taste buds: type I is found on relatively high epidermal papillae, type II is mostly found on low epidermal papillae, and type III taste buds never rise above the normal epithelial level. The presence of taste buds in the oral cavity of the fish, including the jaws, oral valves, and tongue, confirms its gustatory ability, as taste buds play a role in selecting desirable food particles from undesirable ones. These results coincide with those obtained by Hara ; Kubitza and Lovshin ; Fishelson et al. ; Yashpal et al. ; Yashpal et al. ; Devitsina et al. ; and Abbate et al. . The current findings reveal that Tilapia zillii has upper and lower oral valves. The oral valves play a role in regulating water flow in the oral cavity. The same findings were recorded in different fish species [ – , ]. In contrast, Yashpal et al. recorded the presence of the upper valve only in Cirrhinus mrigala, and Coxon and Davison reported the presence of a cartilaginous valve in the New Zealand hagfish ( Eptatretus cirrhatus ). Functionally, this adaptation enables Tilapia zillii to control water flow through its mouth, aiding respiration and feeding, but further research is needed to understand its impact on the species' survival and behavior.
The current findings reveal that Tilapia zillii has a true tongue consisting of a root, body, and apex. The same results were recorded by Abbate et al. in the zebrafish Danio rerio , Abbate et al. in the gilthead seabream Sparus aurata, Abbate et al. in the European sea bass Dicentrarchus labrax , Sadeghinezhad et al. in the northern pike ( Esox lucius ), and Alsafy et al. in Bagrus bayad. In contrast, Bullock and Bunton , Genten et al. , and Abumandour and El-Bakary described the tongue as a triangular elevated thickening of the epithelium, and Genten et al. reported it as a thickening of the mucosa. The current findings reveal a rounded lingual apex, as reported in Bagrus bayad and S. williamsi , whereas Fishelson et al. reported a spatula-shaped apex in S. fulva , and Alsafy et al. reported a pointed lingual apex in the white grouper Epinephelus aeneus. The surface epithelium in different parts of the oral cavity of Tilapia zillii resembles fish scales. Sayed et al. , Mittal and Mittal , Yashpal et al. , and El Bakary reported that the presence of microridges with different arrangements plays a role in several functions such as secretion, absorption, flexibility, and mechanical protection. The microridges are considered structures that protect the oral cavity against physical abrasion during food movement and swallowing, and this protection is enhanced by the secretions of mucous cells that lubricate ingested food particles . These microridges serve as channels for the passage of mucus . Their distribution in the oral cavity reflects the high secretory activity of the epithelium in fish . Several studies described the microridges as fingerprint-like structures covering the oral epithelium. These observations were made in D. dentex , in S. dumerilli , in Rita rita , in B. docmak and C. gariepinus , in M. kannume , C. auratus , B. bynni , and S. schall , and in O. niloticus .
These structures protect the oral mucosa from mechanical trauma, and the secreted mucus forms a lubricated surface for the passage of food . The mucus is secreted by mucous goblet cells or glands located in the oral cavity. The secreted mucus lubricates the oral epithelium and the ingested food particles, ensuring smooth food passage and protecting the oral epithelium from mechanical injury and abrasion [ , , , , ]. The mucus can also inhibit the proliferation and invasion of pathogenic microorganisms on the fish epidermis . The mucus held on the cell surface helps to reserve surface area for stretching or distortion and to spread the mucus outside the goblet cells [ , , , ].
Regarding the herbivorous feeding habit of Tilapia zillii, the morphological characterization of its teeth revealed rostral and caudal groups of premaxillary and dentary teeth of different lengths, bearing processes that help the fish shred ingested particles such as algae. Understanding the oral cavity of Tilapia zillii can aid in identifying effective feeding methods in aquaculture. It also offers insights into the species' evolutionary history and ecological niche, enabling the development of more efficient strategies to optimize fish growth and health.
|
Measuring health literacy: methods and instruments for assessing general health literacy in adults

The first instruments for measuring health literacy (HL) were published in the 1990s, for example the Test of Functional Health Literacy in Adults (TOFHLA) and the Rapid Estimate of Adult Literacy in Medicine (REALM; ). These instruments were used primarily in the USA in healthcare settings. They assess patients' HL through various objective tests based on a functional understanding of HL, relating in particular to the comprehension of written medical or health-related information . Later, these tests were occasionally also used in general population studies. Since these beginnings, however, the understanding of HL, and with it the measurement of HL, has evolved. Roughly four lines of development can be identified : from a focus on illness and coping with illness towards prevention, health promotion, and a comprehensive understanding of health ; from a purely functional understanding of HL (reading, writing, numeracy) towards interactive and critical competencies, i.e., competencies concerning the finding, understanding, appraising, and applying of health information ; from an individualistic towards a relational understanding of HL, which conceives of HL not as a purely individual competence but as an interaction between individual competencies and the demands of the information and service environment ; and from a general understanding of HL towards specific aspects of HL (e.g., mental HL, digital HL, vaccination literacy).
A first comprehensive instrument for measuring HL, the Health Activities Literacy Scale (HALS), was developed in the USA and used in the USA, Canada, and some European countries (2003) , but was not used thereafter. Like the TOFHLA and REALM, it is a performance-based instrument (test) with more than 190 health-related items. In Europe, population-level measurement of HL began in Switzerland with the Swiss Health Literacy Survey (HLS-CH) in 2006 . The HLS-CH used a new survey instrument, based mainly on self-assessment, which, like the HALS, covered several dimensions of HL. The experience gained in Switzerland and the health policy debate triggered by the HLS-CH study led the member states of the European Union (EU) to seek data on the HL of their populations as well. As a result, the first European health literacy study, the European Health Literacy Survey (HLS-EU), was initiated and conducted in 8 countries . Within this study, a comprehensive model of general HL and a new measurement instrument, the European Health Literacy Questionnaire (HLS-EU-Q; ), were developed. The results of the HLS-EU study made clear that a substantial part of the population had difficulties using health-relevant information. In addition, a social gradient in HL and associations with health behavior, health status, and the use of health services were found . The international benchmarking of the 8 HLS-EU countries attracted great attention and showed that HL is perceived as an important health policy issue as soon as data are available. The results of the HLS-EU study led to specific health policy measures to improve HL, particularly in Austria and Germany .
The HLS-EU results were also taken up in the WHO publication "Health literacy: the solid facts", which among other things recommends conducting HL measurements on a regular basis. This recommendation led in 2018 to the founding of the WHO Action Network on Measuring Population and Organizational Health Literacy (M-POHL). With the second European health literacy study – the Health Literacy Survey 2019–2021 (HLS19) – M-POHL has helped ensure that data on the HL of European populations are now available from 17 countries of the WHO European Region. Within the HLS19 study, the HLS-EU instrument for measuring general HL was adapted (HLS19-Q47) and a 12-item short form was developed (HLS19-Q12). In addition, following the international trend, specific health literacies were also surveyed (see section "HLS19-Q12"). There is now a multitude of HL instruments, and the trend toward ever more differentiated treatment of distinct aspects of HL, and of distinct population groups (e.g., children, adolescents, older people), appears unbroken. New topics, such as professional HL, are also being taken up. Given this complexity, the present article concentrates on the measurement of general HL in adults. It first provides an overview of the most frequently used instruments, supplemented by pointers to the measurement of specific HL for interested readers. The subsequent sections describe the currently best-validated instruments for measuring comprehensive general HL: the Health Literacy Questionnaire (HLQ) and the HLS19-Q12. A brief conclusion closes the article. The aim of this article is to provide a compact overview of methods for measuring general health literacy in adults and to present two widely used and well-validated instruments.
The measurement instruments available for research, evaluation, and monitoring are as varied as the understandings of HL. Since the first publication of an instrument for measuring functional HL, the REALM, the number of HL instruments has multiplied exponentially. The Health Literacy Tool Shed, an online collection of HL instruments, offers a first overview. It currently lists more than 200 instruments, including translations and adapted versions. Only a subset of these is suitable for measuring general HL. In addition, numerous systematic reviews have summarized, compared, and appraised the growing number of HL instruments; for the measurement of general HL alone there are at least 12 reviews. Without being able to give an exact figure, an estimated more than 50 instruments for measuring general HL are currently available. They can be divided into instruments based on either a functional or a comprehensive understanding of HL, and into instruments that measure performance-based (i.e., as achievement tests) or experience-based (as self-reports). A combination of "functional understanding and performance measurement" on the one hand and "comprehensive understanding and self-report" on the other is common. Among the most frequently used instruments in the category "functional understanding, performance-based" are the TOFHLA, REALM, and NVS (Newest Vital Sign), while in the category "comprehensive understanding, experience-based" the HLQ and the HLS-EU-Q47, its short forms (HLS-EU-Q16 and HLS-EU-Q6), and the adapted short form HLS19-Q12 stand out. According to a recent review, the latter two instruments (HLQ and HLS-EU-Q) are also the currently best-validated instruments for measuring comprehensive general HL.
Alongside instruments for measuring general HL, there are now also numerous instruments addressing specific health literacies or specific aspects of HL. These include: HL instruments for patients with particular diseases (e.g., respiratory disease, cardiovascular disease, diabetes, mental illness) or for the prevention of communicable diseases (vaccination literacy); lifestyle-related instruments (e.g., for food literacy or physical-activity-related HL); instruments for digital HL/eHealth literacy; and instruments for particular age groups (children, adolescents, and older people). Relevant reviews are referenced within the individual topic clusters. Beyond these, there are reviews dealing with specific forms of data collection or with different methodological approaches to HL measurement.

Health Literacy Questionnaire (HLQ)

The HLQ is one of the two best-validated instruments for measuring comprehensive general HL and is based on the WHO definition of HL as published in the Health Promotion Glossary 1998. That definition refers to the cognitive and social skills which determine the motivation and ability of individuals to gain access to, understand, and use information in ways which promote and maintain good health. Following an inductive approach to instrument development, a team around Richard Osborne (now Swinburne University of Technology, Australia) developed a multifactorial measurement instrument from a collection of relevant aspects of HL compiled in workshops; it consists of 9 factors ("domains") with 4 to 5 items each (44 in total).
With its 9 scales, the instrument focuses primarily on coping with illness: (1) "feeling understood and supported by healthcare providers", (2) "having sufficient information to manage my health", (3) "actively managing my health", (4) "social support for health", (5) "appraisal of health information", (6) "ability to actively engage with healthcare providers", (7) "navigating the health system", (8) "ability to find good quality health information", and (9) "understanding health information well enough to know what to do". The items of factors 1 to 5 are answered on a 4-point rating scale ("strongly disagree" to "strongly agree") and the items of factors 6 to 9 on a 5-point rating scale ("cannot do" to "very easy"). Because of the instrument's length, some studies use only a selection of the 9 scales (e.g., Bo et al.; Simpson et al.). A score is calculated for each factor. These are sum scores whose value ranges differ depending on the number of items and the rating scale. A total score is deliberately not calculated, in view of the multidimensionality of the underlying concept and instrument. Nor are HL "levels" computed: the idea of dividing the population into people with good or poor HL is rejected on principle. Instead, HL profiles are constructed from the 9 scores.
The focus is on the combination of strengths and challenges, as a basis for developing targeted interventions, for example within the Ophelia process (Optimising Health Literacy and Access). The HLQ has been used and validated in numerous studies across languages and contexts. The instrument is currently available in 47 translated or culturally adapted versions (including German), with 4 further translations in progress. It shows good content and criterion validity, the latter demonstrated for health-related behaviors, health indicators, and the use of professional health services. Its factorial validity has been confirmed in confirmatory factor analyses (CFA) and occasionally also in Rasch analyses. Osborne et al. validated the assumed factor structure with a 9-factor CFA model and found good model fit (CFI = 0.936, TLI = 0.930, RMSEA = 0.076). Nolte et al. report comparable values for the German translation (CFI = 0.990, RMSEA = 0.048). With Cronbach's alpha coefficients around or above 0.8, all 9 scales also show good internal consistency. A follow-up study additionally confirmed the unidimensionality of the 9 scales using Rasch analyses. The authors did, however, note overlaps in content, which they judged to be unproblematic. Indications of at least partial overlap between individual factors are also found in Osborne et al. and Nolte et al., so that an underlying higher-order factor may possibly be assumed; this still needs to be settled conclusively in further analyses. The convergent validity of the HLQ has been examined against instruments measuring functional HL (TOFHLA, NVS), which correlate at best weakly with the HLQ scales.
Only factor 5 (appraisal of health information) correlates somewhat more strongly with the NVS (ρ = −0.28), as do factor 8 (finding health information) and factor 9 (understanding health information) with the TOFHLA Reading (ρ = 0.23 and ρ = 0.32, respectively; Spearman correlations). A recent study also examined associations with the HLS19-Q12 (see next section): the 9 HLQ scales correlate positively and moderately with the HLS19-Q12 score (between r = 0.24 and r = 0.42; Pearson correlations). Note, however, that the HLQ and the HLS19-Q12 overlap in content only in part. New translations of the instrument must be validated according to the Translation Integrity Procedure, which recommends both qualitative validation via cognitive interviews and statistical validation. The instrument may, moreover, only be used with permission from Swinburne University of Technology. Use is free of charge for unfunded academic research and for non-profit, non-commercial projects and organizations, but it is tied to conditions, such as a prohibition on publishing the questionnaire used, which could be a deal-breaker for some applications; otherwise, use of the instrument incurs costs. The HLQ is suitable for various survey modes (face-to-face interviews, telephone interviews, online surveys, paper-and-pencil surveys) and is easy to administer, with an average completion time of 7–8 minutes. The HLQ is increasingly used in National Health Literacy Demonstration Projects on Non-communicable Diseases (NCDs) within the WHO European Action Network on Health Literacy for Prevention and Control of NCDs and in the European Joint Action on Cardiovascular Diseases and Diabetes (JACARDI).
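The HLQ scoring approach described above — a separate sum score per domain, with no total score and no categorization into "levels" — can be sketched as follows. This is a minimal illustration of the scoring logic only: the domain numbering and scale ranges are taken from the text, while the example responses are placeholders, since the licensed HLQ items themselves may not be reproduced.

```python
# Illustrative sketch of HLQ-style domain scoring: each of the 9 domains
# receives its own sum score; no total score or HL "levels" are computed.
# Domains 1-5 use a 4-point rating scale, domains 6-9 a 5-point scale.

DOMAIN_SCALE_MAX = {1: 4, 2: 4, 3: 4, 4: 4, 5: 4, 6: 5, 7: 5, 8: 5, 9: 5}

def hlq_domain_scores(responses):
    """responses: {domain_number: [item ratings]} -> {domain_number: sum score}.

    The value range of each score depends on the number of items (4-5)
    and the rating scale of that domain, as described in the text.
    """
    scores = {}
    for domain, items in responses.items():
        top = DOMAIN_SCALE_MAX[domain]
        if any(not 1 <= r <= top for r in items):
            raise ValueError(f"rating out of range for domain {domain}")
        scores[domain] = sum(items)  # sum score only; no total across domains
    return scores
```

A usage example with placeholder ratings: `hlq_domain_scores({1: [3, 3, 4, 4], 6: [5, 4, 3, 4, 5]})` yields per-domain sums that would then feed into an HL profile of strengths and challenges rather than a single classification.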
HLS19-Q12

The HLS19-Q12, a short questionnaire for measuring general HL, is the centerpiece of the M-POHL HLS19 study mentioned above and of the follow-up study HLS24 (Health Literacy Survey 2024–2026). It was developed on the basis of the HLS19-Q47, an adapted version of the HLS-EU-Q47, which, together with its short forms (HLS-EU-Q16 and HLS-EU-Q6), is among the best-validated instruments for measuring comprehensive general HL. The HLS19-Q12/-Q47 and their predecessors rest on a comprehensive understanding of HL: it encompasses people's knowledge, motivation, and competences to find, understand, appraise, and apply relevant health information in order to make everyday judgments and decisions in the domains of healthcare, disease prevention, and health promotion that contribute to better health and quality of life. Mapped onto a 3 × 4 matrix, this yields 12 cells defining relevant subdimensions of general HL (Tab. ). Not covered by the definition and the matrix, but inherent to the HLS-EU and HLS19 instruments, is the relational character of HL: HL arises from the interplay of individual competencies with the demands of the information and service environment and the motivation resulting from it. The HLS-EU and HLS19 instruments therefore ask about difficulties in carrying out HL tasks. The HLS19-Q12 is a short form of the HLS19-Q47. It operationalizes all cells of the 3 × 4 matrix and comprises 12 items phrased as questions, so as to address respondents directly and ease comprehension. The impersonal question wording ("How easy or difficult would you say it is …") also invites respondents to report difficulties they expect but have not themselves experienced.
The response categories form a fully verbalized 4-point rating scale with a symmetrical number of options (from "very easy" to "very difficult"), so as to avoid central-tendency or evasive ("don't know") responses. This also permits a simple, interpretable dichotomization of the response categories. The individual items give concrete indications of existing difficulties in the population, and at the same time a score can be computed from the items. In addition, a categorization of the score into 4 HL levels is proposed, describing the match or mismatch between individual competencies and situational demands and allowing a simple proportional characterization of HL in the population using the categories inadequate, problematic, sufficient, and excellent. The HLS19-Q12 was validated for 17 countries within the M-POHL HLS19 study and, like the HLS-EU-Q47 and its short forms, shows good content and criterion validity, the latter demonstrated for health-related behaviors, health indicators, and the use of professional health services. Its factorial validity has been confirmed by confirmatory factor analyses (CFA) and occasionally by Rasch analyses, also beyond HLS19. The single-factor CFA model computed per country shows good fit in all countries, including Germany and Austria (CFI ≥ 0.97, TLI ≥ 0.96, and RMSEA ≤ 0.07 in 16 of 17 countries). Cronbach's alpha coefficients above 0.8 additionally indicate good internal consistency. The convergent validity of the HLS19-Q12 has so far been examined only in the study by Liu et al., relative to the HLQ: the HLS19-Q12 score correlates positively and moderately (between r = 0.24 and r = 0.42; Pearson correlations) with the 9 HLQ scales.
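The dichotomization-and-levels logic described above can be sketched compactly. Note the hedges: the score below is simply the share of the 12 items rated easy or very easy (0–100), and the level cut-offs are illustrative assumptions for demonstration, not the official HLS19 thresholds.

```python
# Sketch of scoring a 12-item questionnaire on a 4-point scale,
# as described in the text. Ratings: 1 = "very difficult" ... 4 = "very easy".

def hls_q12_score(ratings):
    """Share of items dichotomized as easy/very easy, scaled 0-100."""
    if len(ratings) != 12 or any(r not in (1, 2, 3, 4) for r in ratings):
        raise ValueError("expected 12 ratings on a 4-point scale")
    easy = sum(1 for r in ratings if r >= 3)  # dichotomize: easy vs. difficult
    return 100 * easy / 12

def hls_level(score):
    """Map a 0-100 score onto the four named levels.

    Cut-offs here (25/50/75) are assumed for illustration only.
    """
    if score <= 25:
        return "inadequate"
    if score <= 50:
        return "problematic"
    if score <= 75:
        return "sufficient"
    return "excellent"
```

For example, a respondent rating half of the items as easy would land at a score of 50 and, under these assumed cut-offs, in the "problematic" category.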
Note, however, that the HLS19-Q12 and the HLQ overlap in content only in part. The brevity of the questionnaire allows flexible use in studies and evaluations alike and makes it well suited for monitoring purposes. It also leaves room to cover further aspects of HL alongside general HL. Within the HLS19 study, optional question sets were therefore developed and offered. These follow the same methodological principles as the HLS19-Q12, so their results can be compared with those for general HL. In HLS19, data were collected in this way on digital HL, communicative HL, navigational HL, and vaccination-related HL. Combined with general HL, these specific instruments enable a comprehensive analysis of HL in the population. For the M-POHL Health Literacy Survey 2024–2026 (HLS24), the HLS19 instruments for measuring specific HL were further developed and supplemented with an instrument for mental HL. The HLS19-Q12 has so far been translated into more than 30 languages, with further translations in progress. It is suitable for various survey modes (face-to-face interviews, telephone interviews, online surveys, paper-and-pencil surveys) and easy to administer, with an average completion time of about 2–3 minutes. The HLS19-Q12 is available for research, evaluations, and non-commercial use, for example by health services, and is employed in the M-POHL Health Literacy Surveys and the European Joint Action Prevent Non-Communicable Diseases. It can be obtained free of charge from the M-POHL International Coordination Center.

Conclusion

HL is a central determinant of health, an important lever for greater health equity, and a prerequisite for self-determined decisions in health matters.
Low HL is associated with unfavorable health, risk, prevention, and illness behavior, poorer health status and higher mortality, and with inadequate and increased use of the healthcare system and higher treatment costs. It is unequally distributed in the population and is becoming ever more important in the wake of ecological and societal dynamics (climate change, natural disasters, pandemics, aging societies, digitalization, and rising health costs). In contrast to other social determinants of health, HL is modifiable, whether through interventions that strengthen individual competencies or through measures aimed at reducing the demands involved in using health-relevant information and services. Data on HL help put HL on the (political) agenda. In Austria, for example, the country's poor showing in the HLS-EU study led to a dedicated national health target for HL (gesundheitsziele-oesterreich.at), the founding of the Austrian Health Literacy Platform (oepgk.at), and the lasting anchoring of HL in the health reform (Zielsteuerung-Gesundheit). Measuring HL also makes it possible to identify challenges and target groups and to track developments over time, enabling the planning and implementation of targeted, group-specific HL measures. The choice of a suitable measurement instrument for research, evaluation, or monitoring starts from one's understanding of HL. Here, instruments that operationalize a broad understanding of HL have moved to the fore; these are mainly self-report instruments. In contrast to performance-oriented instruments focused on functional HL, they capture all aspects of comprehensive HL and in part also take the relational understanding of HL into account.
The often-criticized "noise" in self-report data generally reflects respondents' self-efficacy and does not undermine the usefulness of the data and results, since self-assessments shape one's own actions and thereby become relevant to everyday life; these self-assessment effects do, however, need to be borne in mind when interpreting results. The multitude of available instruments makes it possible to select a "fitting" HL instrument, but it complicates the creation of a comparable data base for generating evidence. It is therefore advisable to use well-established instruments in studies, evaluations, and monitoring. For general HL, these are the HLS19-Q12 and -Q47 (and their predecessor instruments) and the HLQ. Both instruments are well validated, widely used, and available in German. While the HLS19-Q12/-Q47 has a strong public-health orientation, the HLQ focuses more strongly on healthcare. The fact that both instruments are already available in numerous languages and have been used in many countries additionally enables their use in multilingual studies and, where appropriate, the comparison of smaller regional or evaluation studies with representative national data.
A genome-wide screen in … (PMC 11892859)
Thus, the gallbladder becomes the main reservoir of L. monocytogenes during infection and the primary source of bacteria excreted in the feces. These observations suggest that replication in the gallbladder is important for infection outcomes and potentially pathogen transmission, and yet little is known about the requirements for L. monocytogenes colonization and proliferation in this organ. The gallbladder is a sac-like organ in which bile is stored and concentrated. Bile is composed of bile salts, cholesterol, phospholipids, and the heme degradation products biliverdin and bilirubin, which give bile its characteristic color. Bile acts as an emulsifier, aiding in the digestion of lipids in food and exhibiting antimicrobial activity by damaging microbial membranes, nucleic acids, and proteins. Despite this, some bacteria have evolved methods of bile detoxification that render them tolerant to bile and capable of colonizing the gallbladder, including L. monocytogenes, Salmonella enterica, and Campylobacter jejuni. Following foodborne infection, S. enterica replicates in the gallbladder both extracellularly in biofilms on gallstones and intracellularly within epithelial cells. Additionally, fecal shedding from chronic asymptomatic carriers is important for the pathogenesis and transmission of S. enterica, the most famous example being Typhoid Mary. In contrast, C. jejuni, a pathogen that infects both livestock and humans, replicates extracellularly and localizes to the mucosal folds of the gallbladder, although the role of gallbladder colonization in disease remains unclear. These infection strategies are distinct from that of L. monocytogenes, which replicates extracellularly in the lumen of the gallbladder. Animal models of infection have been crucial for understanding disease pathogenesis and gallbladder colonization of bacterial pathogens in vivo. Murine infection models are most commonly used, but they pose two major limitations.
First, there is a severely restrictive bottleneck in which fewer than five L. monocytogenes initially seed the gallbladder following either oral or intravenous inoculation. The undefined bottlenecks in vivo render the mouse model unsuitable for genetic screening approaches, such as Tn-seq or competitive mixed infections. Second, the murine gallbladder is extremely small, containing only 5–15 µL of biofluid, limiting its utility for biochemical studies aimed at identifying the requirements for L. monocytogenes colonization. Guinea pig models of oral infection have been used to assess L. monocytogenes dissemination from the GI tract to peripheral organs, but gallbladder colonization was not assessed in this model. Sheep have been used to study C. jejuni infection of gallbladders in vivo, but these studies are low-throughput and require veterinary surgical expertise to complete. Purified bile salts and reconstituted powdered bile are frequently used to mimic the gallbladder environment in vitro, but it is not clear what concentration and diluent accurately represent gallbladder biofluid. In this study, we sought to identify L. monocytogenes genes required for replication in the mammalian gallbladder using a transposon sequencing (Tn-seq) approach. This technique combines saturating transposon mutagenesis with next-generation sequencing to assess the contribution of every genetic locus in a high-throughput manner. Tn-seq has been used to identify essential genes and genes conditionally essential for survival in a host for several pathogenic bacteria, including Staphylococcus aureus, Vibrio cholerae, Streptococcus pneumoniae, and recently L. monocytogenes. However, the restrictive bottlenecks in mouse models of listeriosis make the use of global genetic approaches to study gallbladder colonization in vivo unfeasible. Here, we performed Tn-seq on L.
monocytogenes in ex vivo non-human primate gallbladders and identified 43 genes necessary for survival and replication in this environment and more broadly in the context of a murine model of listeriosis.
An unbiased approach identifies L. monocytogenes genes required for growth and survival in the gallbladder lumen

To identify Listeria monocytogenes genes required for gallbladder colonization, we developed a novel model using non-human primate (NHP) gallbladders obtained from the Washington National Primate Research Center Tissue Distribution Program. There are several advantages to NHP organs over conventional murine models. First, NHP gallbladders can be inoculated with bacteria via syringe, eliminating the bottlenecks to colonization encountered during murine infections. Second, NHP organs are larger and contain ~1,000-fold more biofluid than murine gallbladders, which can support more bacterial biomass or be harvested for in vitro assays. Finally, organs were obtained from NHPs at the endpoint of other non-infectious experiments, and therefore no additional animals were sacrificed for these studies. In the development of the gallbladder colonization model, we first assessed whether ex vivo gallbladder biofluid (bile) supports growth of L. monocytogenes. Bile harvested from three independent NHP gallbladders was determined to be sterile and supported exponential growth of L. monocytogenes in vitro. We next injected mid-log L. monocytogenes into the lumen of intact ex vivo gallbladders and monitored bacterial survival over time by removing luminal contents with a syringe and plating to enumerate colony forming units (CFU). Interestingly, we observed consistent reductions in CFU shortly after inoculation, followed by exponential growth that plateaued between 6 and 12 hours post-injection. Additionally, we observed that immersing the organ in medium, such as DMEM, accelerated tissue deterioration. To minimize potential host cell death and preserve organ integrity, we limited the incubation time to 6 hours and conducted subsequent experiments by incubating the organs on dry, sterile petri dishes. After establishing growth conditions for L.
monocytogenes in NHP gallbladders, we used this ex vivo model to investigate the L. monocytogenes genes required for survival and growth in the gallbladder lumen using the unbiased global genetic approach of transposon sequencing (Tn-seq). Four NHP gallbladders were inoculated via syringe with a saturated transposon mutant library of L. monocytogenes containing transposon insertion sequences approximately every 25 base pairs. To monitor growth of the mutant library in the organs, samples of luminal contents were collected 30 minutes and 6 hours post-injection. As observed previously with wild type (WT) L. monocytogenes, an initial reduction in CFU at 30 minutes post-injection was followed by exponential growth through 6 hours, with an average doubling time of 46 minutes. This doubling time is similar to that observed for L. monocytogenes growing in rich medium, demonstrating robust growth in this environment. After 6 hours of incubation in the gallbladders, the entire luminal contents were harvested, diluted in brain heart infusion (BHI) broth, and incubated for 2 hours to increase biomass. Bacterial genomic DNA was then isolated and libraries were prepared for Illumina sequencing of the transposon insertion sites. Using the parameters of a log2 fold-change less than -1.50 and an adjusted p-value of less than 0.05, mutants in 43 genes were significantly depleted after incubation in the gallbladders compared to the input libraries, indicating that these genes are required for growth or survival in the NHP gallbladder lumen. Genes significantly depleted after incubation in the gallbladder are listed in , categorized by the biological pathways associated with the proteins they encode. Based on the known anti-microbial properties of bile, we expected to identify genes involved in combatting membrane, DNA, and protein stress, as well as genes involved in redox homeostasis. We also expected to identify genes in metabolic pathways essential for L.
monocytogenes replication in the gallbladder. In fact, the Tn-seq screen identified genes involved in protein homeostasis (clpX, encoding a Clp protease ATPase, and prsA2, encoding the PrsA2 chaperone), redox homeostasis (trxA and yjbH, encoding thioredoxins), and DNA recombination and repair (xerD and recR, encoding recombinases). We also identified four genes encoding nucleotide transport and metabolism proteins, including the adenylosuccinate synthesis genes purA and purB, consistent with the importance of purine biosynthesis in surviving bile stress. Ten genes encoding proteins involved in carbohydrate transport and metabolism were identified as depleted, including two phosphoenolpyruvate-dependent phosphotransferase system (PTS) permeases, mptACD and mpoBCD, which are known to import both glucose and mannose. PTSs are multi-protein complexes utilized by many bacteria to import and phosphorylate defined carbohydrates, with the sugar specificity determined by the Enzyme II (EII) complex proteins. While the L. monocytogenes genome encodes 29 complete PTSs, our screen identified Mpt and Mpo as the only EIIs required for growth in the gallbladder. In addition to the PTS permeases, genes encoding regulators that activate transcription of the mpt and mpo operons (manR and sigL), and genes for phosphorylation of the PTS sugars (ptsI and ptsH, encoding EI and HPr, respectively) were also significantly depleted in the gallbladder condition. Identification of multiple PTS-related operons and their regulators, which lie at distinct genetic loci, suggests that L. monocytogenes imports glucose and/or mannose via Mpt and Mpo for growth in the gallbladder. The Tn-seq also identified genes encoding proteins involved in coenzyme metabolism (panCD), amino acid metabolism (ansB, glnA), and energy production. In fact, 8 of the 9 genes encoding the F-type ATP synthase were depleted after growth in the gallbladders.
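The two quantitative steps described above — calling a gene significantly depleted when its log2 fold-change is below -1.50 with an adjusted p-value below 0.05, and estimating the doubling time from CFU counts at two timepoints under exponential growth — can be sketched as follows. The gene names and numbers in the example are invented placeholders, not the screen's actual data.

```python
import math

def depleted_genes(results, lfc_cutoff=-1.50, padj_cutoff=0.05):
    """results: {gene: (log2_fold_change, adjusted_p)}.

    Returns the sorted genes meeting both depletion criteria, i.e.
    a >~2.8-fold drop in insertion abundance at adjusted p < 0.05.
    """
    return sorted(g for g, (lfc, padj) in results.items()
                  if lfc < lfc_cutoff and padj < padj_cutoff)

def doubling_time_min(cfu_t0, cfu_t1, minutes_elapsed):
    """Doubling time assuming exponential growth between two CFU counts."""
    return minutes_elapsed * math.log(2) / math.log(cfu_t1 / cfu_t0)
```

For instance, a culture growing from 10^3 to 8 x 10^3 CFU over 90 minutes has undergone three doublings, giving a doubling time of 30 minutes; applied to the 30-minute and 6-hour gallbladder samples, the same arithmetic underlies the ~46-minute figure reported above.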
Overall, analysis of our screen identified multiple operons as depleted after growth in the NHP gallbladder, indicating that the screen was robust. Importantly, many genes identified here have not been previously implicated in the context of infection. Genes identified by Tn-seq in the NHP gallbladder contribute to replication in bile in vitro To investigate the roles of the identified genes in L. monocytogenes physiology and pathogenesis, we used allelic exchange techniques to generate nine deletion mutants, representative of 19 genes identified in our screen ( ). These genes were chosen to represent the biological categories most depleted after growth in the gallbladder, including: protein and redox homeostasis, nucleotide transport and metabolism, carbohydrate transport and metabolism, energy production, and regulation ( ). To evaluate the PTS permeases, the entire mptACD (∆ mpt ) or mpoABCD (∆ mpo ) operons were deleted and a double mutant lacking both operons ( ∆mpt∆mpo ) was generated. To evaluate the F-type ATP synthase, the atpB open reading frame was deleted, which eliminates functionality of the entire complex . The mutants were then grown individually either in BHI broth or NHP bile in 96-well plates, and CFU were enumerated after 0.5 and 6 hours of static incubation. Because the oxygen status of the gallbladder lumen is not known, L. monocytogenes growth was evaluated in both aerobic and anaerobic conditions. Several mutants exhibited general growth defects and replicated significantly less than WT in rich medium, including ∆ ccpA , ∆ ptsI , and ∆ trxA ( and ). The ∆ atpB strain had the most striking phenotype in BHI, displaying a slight ~4-fold reduction in CFU in the presence of oxygen and a complete lack of growth in anaerobic conditions ( ). This is consistent with a previous report documenting that the F-type ATPase is essential for L. monocytogenes anaerobic growth . 
We additionally measured growth in BHI in shaking flasks to assess growth in maximally aerated conditions in rich medium. Under these growth conditions, the ∆ atpB mutant exhibited significantly decreased CFU at the earliest time point, while ∆ trxB was attenuated for growth at 6 and 8 hours post-inoculation ( and ). Although the relevant oxygen levels during infection are unknown, these data provide a broad view of mutant growth in vitro under varying oxygen levels and reveal that ∆ atpB and ∆ trxA are generally impaired for growth in rich medium. When incubated in bile under anaerobic conditions, all mutants were significantly impaired, with the exception of ∆ ccpA ( ). This decrease in CFU at 6 hours was not driven by differences in killing early after inoculation, as all strains exhibited similar reductions in CFU at 30 minutes ( ). Interestingly, some mutants exhibited an oxygen-dependent phenotype. For example, ∆ atpB , ∆ mpt , and ∆ mpt ∆ mpo grew similarly to WT in bile under aerobic conditions ( ), but displayed reduced growth under anaerobic conditions ( ). Together, these results demonstrated that the genes identified by Tn-seq as important during growth in the NHP gallbladder lumen are also required for growth in bile in vitro , even in the absence of competing strains. Genes identified by Tn-seq as critical for survival in the gallbladder lumen also contribute to intracellular fitness Tn-seq identified L. monocytogenes genes required for growth in the gallbladder lumen, which represents one of the extracellular environments that L. monocytogenes encounters during infection. While extracellular niches of infection remain largely uncharacterized, the determinants of intracellular infection and their roles in systemic disease have been thoroughly described. The intracellular lifecycle begins with L. 
monocytogenes entering host cells via phagocytosis or receptor-mediated endocytosis, vacuolar escape, followed by cytosolic replication and intercellular spread to neighboring cells via actin-dependent motility . To determine if the genes we identified also have important roles in the intracellular lifecycle, we assessed cell-to-cell spread and cytosolic replication of each of the L. monocytogenes mutants in cell culture . The intracellular lifecycle was first evaluated via plaque assay in which a monolayer of cells is infected with L. monocytogenes and then immobilized in agarose containing gentamicin to prevent extracellular growth. Three days post-infection, the live cells are stained and the zones of clearance formed by L. monocytogenes are measured as an indicator of intracellular growth and intercellular spread. In this assay, most mutants formed significantly smaller plaques than those formed by WT, while mutants lacking the PTS operons mpt and mpo formed plaques similar in size to WT ( ). Notably, infections with ∆atpB resulted in no visible plaque formation. We hypothesize this is due to the ∆ atpB requirement for oxygen, which may be limiting in cells with the agarose overlay. Plaque areas were also measured after infection with the complemented strains, in which each deleted gene was expressed from its native promoter at an ectopic site on the chromosome. With one exception, complementation restored plaque areas to WT levels ( ). The ptsI complemented strain produced even smaller plaque areas than ∆ ptsI, although this strain did restore other ∆ ptsI growth defects, as discussed below. The plaque assay measures both cytosolic replication and cell-to-cell spread. To identify the role of each gene in intracellular growth, we measured replication kinetics in primary bone marrow-derived macrophages (BMDMs) over 8 hours. The ∆purB mutant exhibited the most dramatic phenotype as it did not replicate in the host cytosol ( ). 
Several additional mutants displayed attenuated intracellular growth, including ∆ccpA, ∆ptsI , and ∆trxA ( ). Intracellular growth was fully restored in the complemented strains, including ∆ ptsI + ptsI ( ). Despite defects in plaque formation, both clpX::Tn and ∆atpB grew similarly to WT in BMDMs, indicating that these mutants are defective specifically in the cell-to-cell spread stage of the intracellular lifecycle. Finally, strains lacking the PTS operons (∆mpt, ∆mpo, ∆mpt∆mpo ) displayed no defects in cytosolic growth, consistent with these strains forming WT-sized plaques ( ). Together, these results indicated that although we identified these genes using the selective pressure of extracellular growth in a mammalian organ, many also contribute to intracellular infection. However, the PTS operons were found to be dispensable during intracellular infection, consistent with prior work . L. monocytogenes genes important in the NHP gallbladder are required for oral infection of mice. The Tn-seq screen identified many genes important for extracellular replication in NHP gallbladders as well as intracellular growth and intercellular spread in murine cells. Thus, we hypothesized that these genes would be important for virulence in a mouse model of oral listeriosis. For these infections, 6-7 week old female BALB/c mice were given streptomycin in their drinking water for 2 days and fasted for 16 hours prior to infection to increase susceptibility to oral infection [ , , ]. Mice were then fed 10 8 CFU of each L. monocytogenes strain via pipette. Body weights were recorded daily as a measurement of global disease severity. Mice infected with WT lost nearly 20% of their initial body weight throughout the 4 day infection, whereas mice infected with most of the mutant strains exhibited significantly less weight loss ( and ). 
Notably, mice infected with ∆ ccpA , ∆ mpo , or ∆ mpt ∆ mpo lost approximately the same amount of weight as mice infected with WT, suggesting that that these genes may not be required for L. monocytogenes pathogenesis in vivo . Conversely, mice infected with ∆ptsI , ∆trxA , and ∆purB lost very little weight over the 4 day infection, suggesting that these strains were severely attenuated in their pathogenicity. To assess bacterial burdens throughout infection, mice were euthanized and CFU were enumerated from organs at both 1 and 4 days post-infection (dpi). After ingestion, L. monocytogenes in the GI tract disseminates via the portal vein to the liver and subsequently the gallbladder. At 1 dpi, bacterial burdens in the livers and gallbladders were similar between WT and all mutant strains, indicating that these genes are not required for dissemination from the GI tract to the liver or gallbladder ( and ). By 4 dpi, 7 of the 9 mutants displayed significantly decreased bacterial burdens in the gallbladder compared to WT ( ). Bacterial burdens in mice infected with ∆ptsI , ∆purB , and clpX::Tn were decreased by more than 300,000-fold compared to mice infected with WT. In contrast, bacterial loads of mice infected with ∆ trxA , ∆atpB, ∆mpt , ∆mpo , and ∆mpt∆mpo displayed more variability in CFU between animals and were decreased by 174- to over 38,000-fold compared to mice infected with WT. Interestingly, all mutants were significantly attenuated in the livers at 4 dpi, with the exception of ∆ ccpA ( ). These data demonstrate that the majority of the L. monocytogenes genes identified by Tn-seq as important for colonization of NHP gallbladders ex vivo were also required for infection of murine gallbladders and livers in vivo. In addition to disseminating directly to the liver via the portal vein, L. monocytogenes disseminates from the GI tract via the lymphatics through the MLN and to the spleen . 
Most mutants colonized the MLN and displayed similar bacterial burdens as WT at both 1 and 4 dpi, with the exception of ∆ptsI , ∆purB , and ∆atpB , which were significantly attenuated in the MLN compared to WT ( and ). In the spleens, only ∆ptsI, ∆purB, ∆mpt, and clpX::Tn were significantly attenuated at 4 dpi compared to WT ( ). Bacterial burdens in the feces were also enumerated as a measure of L. monocytogenes colonization in the lower GI tract lumen. Bacterial burdens in the feces were similar between most mutants and WT at 1 dpi, whereas the majority of mutants exhibited significantly decreased bacterial loads compared to WT at 4 dpi ( and ). The notable exception is ∆ purB , which was decreased ~150-fold compared to WT in the feces at 1 dpi, but similar to WT by 4 dpi. These data collectively demonstrate that the genes identified in our ex vivo screen contribute to infection of multiple organs, including the gallbladder, following oral infection of mice.
L. monocytogenes genes required for growth and survival in the gallbladder lumen

To identify Listeria monocytogenes genes required for gallbladder colonization, we developed a novel model using non-human primate (NHP) gallbladders obtained from the Washington National Primate Research Center Tissue Distribution Program. There are several advantages to NHP organs over conventional murine models. First, NHP gallbladders can be inoculated with bacteria via syringe, eliminating the bottlenecks to colonization encountered during murine infections. Second, NHP organs are larger and contain ~1,000-fold more biofluid than murine gallbladders, which can support more bacterial biomass or be harvested for in vitro assays. Finally, organs were obtained from NHPs at the endpoint of other non-infectious experiments, and therefore no additional animals were sacrificed for these studies. In the development of the gallbladder colonization model, we first assessed whether ex vivo gallbladder biofluid (bile) supports growth of L. monocytogenes. Bile harvested from three independent NHP gallbladders was determined to be sterile and supported exponential growth of L. monocytogenes in vitro. We next injected mid-log L. monocytogenes into the lumen of intact ex vivo gallbladders and monitored bacterial survival over time by removing luminal contents with a syringe and plating to enumerate colony forming units (CFU). Interestingly, we observed consistent reductions in CFU shortly after inoculation, followed by exponential growth that plateaued between 6 and 12 hours post-injection. Additionally, we observed that immersing the organ in medium, such as DMEM, accelerated tissue deterioration. To minimize potential host cell death and preserve organ integrity, we limited the incubation time to 6 hours and conducted subsequent experiments by incubating the organs on dry, sterile petri dishes. After establishing growth conditions for L.
monocytogenes in NHP gallbladders, we used this ex vivo model to investigate the L. monocytogenes genes required for survival and growth in the gallbladder lumen using the unbiased global genetic approach of transposon sequencing (Tn-seq). Four NHP gallbladders were inoculated via syringe with a saturated transposon mutant library of L. monocytogenes containing transposon insertion sequences approximately every 25 base pairs . To monitor growth of the mutant library in the organs, samples of luminal contents were collected 30 minutes and 6 hours post-injection. As observed previously with wild type (WT) L. monocytogenes, an initial reduction in CFU at 30 minutes post-injection was followed by exponential growth through 6 hours, with an average doubling time of 46 minutes ( ). This doubling time is similar to that observed for L. monocytogenes growing in rich medium, demonstrating robust growth in this environment. After 6 hours of incubation in the gallbladders, the entire luminal contents were harvested, diluted in brain heart infusion (BHI) broth, and incubated for 2 hours to increase biomass. Bacterial genomic DNA was then isolated and libraries were prepared for Illumina sequencing of the transposon insertion sites ( and ). Using the parameters of a log2 fold-change less than -1.50 and an adjusted p-value of less than 0.05, mutants in 43 genes were significantly depleted after incubation in the gallbladders compared to the input libraries, indicating that these genes are required for growth or survival in the NHP gallbladder lumen. Genes significantly depleted after incubation in the gallbladder are listed in , categorized by the biological pathways associated with the proteins they encode. Based on the known anti-microbial properties of bile, we expected to identify genes involved in combatting membrane, DNA, and protein stress, as well as genes involved in redox homeostasis. We also expected to identify genes in metabolic pathways essential for L.
monocytogenes replication in the gallbladder. In fact, the Tn-seq screen identified genes involved in protein homeostasis ( clpX , encoding a Clp protease ATPase and prsA2 , encoding the PrsA2 chaperone), redox homeostasis ( trxA and yjbH , encoding thioredoxins), and DNA recombination and repair ( xerD and recR , encoding recombinases). We also identified four genes encoding nucleotide transport and metabolism proteins, including the adenylosuccinate synthesis genes encoded by purA and purB , consistent with the importance of purine biosynthesis in surviving bile stress . Ten genes encoding proteins involved in carbohydrate transport and metabolism were identified as depleted, including two phosphoenolpyruvate-dependent phosphotransferase system (PTS) permeases, mptACD and mpoBCD , which are known to import both glucose and mannose . PTSs are multi-protein complexes utilized by many bacteria to import and phosphorylate defined carbohydrates, with the sugar specificity determined by the Enzyme II (EII) complex proteins . While the L. monocytogenes genome encodes 29 complete PTSs , our screen identified Mpt and Mpo as the only EIIs required for growth in the gallbladder ( ). In addition to the PTS permeases, genes encoding regulators that activate transcription of the mpt and mpo operons ( manR and sigL ), and for phosphorylation of the PTS sugars ( ptsI and ptsH, encoding EI and HPr, respectively) were also significantly depleted in the gallbladder condition. Identification of multiple PTS-related operons and their regulators, which lie at distinct genetic loci, suggests that L. monocytogenes imports glucose and/or mannose via Mpt and Mpo for growth in the gallbladder ( ). The Tn-seq also identified genes encoding proteins involved in coenzyme metabolism ( panCD ), amino acid metabolism ( ansB, glynA ), and energy production. In fact, 8 of the 9 genes encoding the F-type ATP synthase were depleted after growth in the gallbladders. 
Overall, analysis of our screen identified multiple operons as depleted after growth in the NHP gallbladder, indicating that the screen was robust. Importantly, many genes identified here have not been previously implicated in the context of infection.
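Computationally, the depletion call described above is a two-condition filter over per-gene Tn-seq statistics. A minimal sketch of that filter, using hypothetical gene names and values rather than the study's actual output:

```python
# Illustrative sketch of calling depleted Tn-seq mutants.
# Gene names and statistics below are hypothetical placeholders,
# not values from this study.

def depleted_hits(results, lfc_cutoff=-1.50, padj_cutoff=0.05):
    """Return genes whose mutants are significantly depleted in the
    output pool: log2 fold-change below the cutoff AND adjusted
    p-value below the significance threshold."""
    return [
        gene
        for gene, (log2_fc, padj) in results.items()
        if log2_fc < lfc_cutoff and padj < padj_cutoff
    ]

# (log2 fold-change, adjusted p-value) per gene -- made-up numbers.
example = {
    "purB": (-3.2, 0.001),   # strongly depleted and significant -> hit
    "mptA": (-2.1, 0.010),   # depleted and significant -> hit
    "geneX": (-0.4, 0.300),  # essentially unchanged -> not a hit
    "geneY": (-2.0, 0.200),  # depleted but not significant -> not a hit
}

print(depleted_hits(example))  # ['purB', 'mptA']
```

Both conditions must hold: a mutant that is depleted but not statistically significant (or vice versa) is not called a hit.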
Genes identified by Tn-seq in the NHP gallbladder contribute to replication in bile in vitro

To investigate the roles of the identified genes in L. monocytogenes physiology and pathogenesis, we used allelic exchange techniques to generate nine deletion mutants, representative of 19 genes identified in our screen ( ). These genes were chosen to represent the biological categories most depleted after growth in the gallbladder, including protein and redox homeostasis, nucleotide transport and metabolism, carbohydrate transport and metabolism, energy production, and regulation ( ). To evaluate the PTS permeases, the entire mptACD (∆mpt) or mpoABCD (∆mpo) operons were deleted and a double mutant lacking both operons (∆mpt∆mpo) was generated. To evaluate the F-type ATP synthase, the atpB open reading frame was deleted, which eliminates functionality of the entire complex . The mutants were then grown individually in either BHI broth or NHP bile in 96-well plates, and CFU were enumerated after 0.5 and 6 hours of static incubation. Because the oxygen status of the gallbladder lumen is not known, L. monocytogenes growth was evaluated in both aerobic and anaerobic conditions. Several mutants exhibited general growth defects and replicated significantly less than WT in rich medium, including ∆ccpA, ∆ptsI, and ∆trxA ( and ). The ∆atpB strain had the most striking phenotype in BHI, displaying a slight ~4-fold reduction in CFU in the presence of oxygen and a complete lack of growth in anaerobic conditions ( ). This is consistent with a previous report documenting that the F-type ATPase is essential for L. monocytogenes anaerobic growth . We additionally measured growth in BHI in shaking flasks to assess growth in maximally aerated conditions in rich medium. Under these growth conditions, the ∆atpB mutant exhibited significantly decreased CFU at the earliest time point, while ∆trxA was attenuated for growth at 6 and 8 hours post-inoculation ( and ).
Although the relevant oxygen levels during infection are unknown, these data provide a broad view of mutant growth in vitro under varying oxygen levels and reveal that ∆ atpB and ∆ trxA are generally impaired for growth in rich medium. When incubated in bile under anaerobic conditions, all mutants were significantly impaired, with the exception of ∆ ccpA ( ). This decrease in CFU at 6 hours was not driven by differences in killing early after inoculation, as all strains exhibited similar reductions in CFU at 30 minutes ( ). Interestingly, some mutants exhibited an oxygen-dependent phenotype. For example, ∆ atpB , ∆ mpt , and ∆ mpt ∆ mpo grew similarly to WT in bile under aerobic conditions ( ), but displayed reduced growth under anaerobic conditions ( ). Together, these results demonstrated that the genes identified by Tn-seq as important during growth in the NHP gallbladder lumen are also required for growth in bile in vitro , even in the absence of competing strains.
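Growth phenotypes like these come down to CFU counts over time. Assuming exponential growth between two sampling points, the average doubling time is t·ln(2)/ln(N_end/N_start); the sketch below uses the 30-minute-to-6-hour sampling window (330 minutes) described for the gallbladder experiments, but entirely hypothetical CFU counts:

```python
import math

def doubling_time_minutes(cfu_start, cfu_end, elapsed_minutes):
    """Average doubling time over an interval of exponential growth:
    t_d = t * ln(2) / ln(N_end / N_start)."""
    return elapsed_minutes * math.log(2) / math.log(cfu_end / cfu_start)

# Hypothetical CFU recovered at 30 min and 6 h post-injection
# (a 330-minute window); the counts are invented for illustration.
t_d = doubling_time_minutes(cfu_start=1e4, cfu_end=1.5e6, elapsed_minutes=330)
print(round(t_d))  # 46 minutes for these made-up counts
```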
Genes identified by Tn-seq as critical for survival in the gallbladder lumen also contribute to intracellular fitness

Tn-seq identified L. monocytogenes genes required for growth in the gallbladder lumen, which represents one of the extracellular environments that L. monocytogenes encounters during infection. While extracellular niches of infection remain largely uncharacterized, the determinants of intracellular infection and their roles in systemic disease have been thoroughly described. The intracellular lifecycle begins with L. monocytogenes entering host cells via phagocytosis or receptor-mediated endocytosis, followed by vacuolar escape, cytosolic replication, and intercellular spread to neighboring cells via actin-dependent motility . To determine if the genes we identified also have important roles in the intracellular lifecycle, we assessed cell-to-cell spread and cytosolic replication of each of the L. monocytogenes mutants in cell culture . The intracellular lifecycle was first evaluated via plaque assay, in which a monolayer of cells is infected with L. monocytogenes and then immobilized in agarose containing gentamicin to prevent extracellular growth. Three days post-infection, the live cells are stained and the zones of clearance formed by L. monocytogenes are measured as an indicator of intracellular growth and intercellular spread. In this assay, most mutants formed significantly smaller plaques than those formed by WT, while mutants lacking the PTS operons mpt and mpo formed plaques similar in size to WT ( ). Notably, infections with ∆atpB resulted in no visible plaque formation. We hypothesize this is due to the ∆atpB requirement for oxygen, which may be limiting in cells under the agarose overlay. Plaque areas were also measured after infection with the complemented strains, in which each deleted gene was expressed from its native promoter at an ectopic site on the chromosome. With one exception, complementation restored plaque areas to WT levels ( ).
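Plaque sizes in this kind of assay are conventionally reported relative to the WT mean. A minimal sketch of that normalization (all areas are hypothetical, not measurements from this study):

```python
def percent_of_wt(mutant_areas_mm2, wt_mean_mm2):
    """Mean mutant plaque area expressed as a percentage of the WT mean."""
    mean_area = sum(mutant_areas_mm2) / len(mutant_areas_mm2)
    return 100.0 * mean_area / wt_mean_mm2

# Hypothetical plaque areas (mm^2) for one mutant, against a WT mean of 1.2.
print(round(percent_of_wt([0.60, 0.66, 0.72], wt_mean_mm2=1.2)))  # 55
```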
The ptsI complemented strain produced even smaller plaque areas than ∆ptsI, although this strain did restore other ∆ptsI growth defects, as discussed below. The plaque assay measures both cytosolic replication and cell-to-cell spread. To identify the role of each gene in intracellular growth, we measured replication kinetics in primary bone marrow-derived macrophages (BMDMs) over 8 hours. The ∆purB mutant exhibited the most dramatic phenotype, as it did not replicate in the host cytosol ( ). Several additional mutants displayed attenuated intracellular growth, including ∆ccpA, ∆ptsI, and ∆trxA ( ). Intracellular growth was fully restored in the complemented strains, including ∆ptsI + ptsI ( ). Despite defects in plaque formation, both clpX::Tn and ∆atpB grew similarly to WT in BMDMs, indicating that these mutants are defective specifically in the cell-to-cell spread stage of the intracellular lifecycle. Finally, strains lacking the PTS operons (∆mpt, ∆mpo, ∆mpt∆mpo) displayed no defects in cytosolic growth, consistent with these strains forming WT-sized plaques ( ). Together, these results indicated that although we identified these genes using the selective pressure of extracellular growth in a mammalian organ, many also contribute to intracellular infection. However, the PTS operons were found to be dispensable during intracellular infection, consistent with prior work .

L. monocytogenes genes important in the NHP gallbladder are required for oral infection of mice

The Tn-seq screen identified many genes important for extracellular replication in NHP gallbladders as well as intracellular growth and intercellular spread in murine cells. Thus, we hypothesized that these genes would be important for virulence in a mouse model of oral listeriosis. For these infections, 6-7 week old female BALB/c mice were given streptomycin in their drinking water for 2 days and fasted for 16 hours prior to infection to increase susceptibility to oral infection [ , , ].
Mice were then fed 10⁸ CFU of each L. monocytogenes strain via pipette. Body weights were recorded daily as a measurement of global disease severity. Mice infected with WT lost nearly 20% of their initial body weight throughout the 4-day infection, whereas mice infected with most of the mutant strains exhibited significantly less weight loss ( and ). Notably, mice infected with ∆ccpA, ∆mpo, or ∆mpt∆mpo lost approximately the same amount of weight as mice infected with WT, suggesting that these genes may not be required for L. monocytogenes pathogenesis in vivo. Conversely, mice infected with ∆ptsI, ∆trxA, and ∆purB lost very little weight over the 4-day infection, suggesting that these strains were severely attenuated in their pathogenicity. To assess bacterial burdens throughout infection, mice were euthanized and CFU were enumerated from organs at both 1 and 4 days post-infection (dpi). After ingestion, L. monocytogenes in the GI tract disseminates via the portal vein to the liver and subsequently the gallbladder. At 1 dpi, bacterial burdens in the livers and gallbladders were similar between WT and all mutant strains, indicating that these genes are not required for dissemination from the GI tract to the liver or gallbladder ( and ). By 4 dpi, 7 of the 9 mutants displayed significantly decreased bacterial burdens in the gallbladder compared to WT ( ). Bacterial burdens in mice infected with ∆ptsI, ∆purB, and clpX::Tn were decreased by more than 300,000-fold compared to mice infected with WT. In contrast, bacterial loads of mice infected with ∆trxA, ∆atpB, ∆mpt, ∆mpo, and ∆mpt∆mpo displayed more variability in CFU between animals and were decreased by 174- to over 38,000-fold compared to mice infected with WT. Interestingly, all mutants were significantly attenuated in the livers at 4 dpi, with the exception of ∆ccpA ( ). These data demonstrate that the majority of the L.
monocytogenes genes identified by Tn-seq as important for colonization of NHP gallbladders ex vivo were also required for infection of murine gallbladders and livers in vivo. In addition to disseminating directly to the liver via the portal vein, L. monocytogenes disseminates from the GI tract via the lymphatics through the MLN and to the spleen . Most mutants colonized the MLN and displayed similar bacterial burdens as WT at both 1 and 4 dpi, with the exception of ∆ptsI , ∆purB , and ∆atpB , which were significantly attenuated in the MLN compared to WT ( and ). In the spleens, only ∆ptsI, ∆purB, ∆mpt, and clpX::Tn were significantly attenuated at 4 dpi compared to WT ( ). Bacterial burdens in the feces were also enumerated as a measure of L. monocytogenes colonization in the lower GI tract lumen. Bacterial burdens in the feces were similar between most mutants and WT at 1 dpi, whereas the majority of mutants exhibited significantly decreased bacterial loads compared to WT at 4 dpi ( and ). The notable exception is ∆ purB , which was decreased ~150-fold compared to WT in the feces at 1 dpi, but similar to WT by 4 dpi. These data collectively demonstrate that the genes identified in our ex vivo screen contribute to infection of multiple organs, including the gallbladder, following oral infection of mice.
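Fold-attenuation figures like those above are typically derived from per-animal CFU on a log scale, for example as a ratio of geometric means. A sketch with invented CFU values (the study's actual numbers and statistical treatment may differ):

```python
import math

def geometric_mean(values):
    """Geometric mean, appropriate for CFU data spanning orders of magnitude."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

def fold_attenuation(wt_cfu, mutant_cfu):
    """Fold-decrease in bacterial burden: geometric mean of WT CFU
    divided by geometric mean of mutant CFU."""
    return geometric_mean(wt_cfu) / geometric_mean(mutant_cfu)

# Hypothetical gallbladder burdens at 4 dpi, one value per mouse.
wt = [2e8, 5e8, 1e9]
mutant = [3e2, 1e3, 6e2]
print(f"~{fold_attenuation(wt, mutant):.0f}-fold decrease")
```

Using geometric rather than arithmetic means keeps a single high-burden animal from dominating the comparison.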
In this study we sought to identify L. monocytogenes genes important for infection of the mammalian gallbladder. To this end, we developed an ex vivo bacterial colonization model of NHP gallbladders and performed Tn-seq to determine the genes necessary for growth and survival in this environment. This unbiased global genetic approach identified mutants in 43 genes that were significantly depleted after growth in the gallbladder condition, including some genes known to be important for virulence and others not previously studied in the context of infection. Several mutants identified by Tn-seq had growth defects in rich medium and most were predictably attenuated for growth in NHP bile in vitro . Many mutants also had defects in the intracellular lifecycle, including cytosolic growth and cell-to-cell spread, with the notable exception of the PTS permeases. A murine model of oral L. monocytogenes infection revealed that nearly all identified genes are required for full virulence. Together, these data identified genes that are important for L. monocytogenes infection of a mammal and, interestingly, not all are required for intracellular replication or intercellular spread. Animal models have shed light on the importance of gallbladder colonization during L. monocytogenes pathogenesis. Murine models of infection demonstrated that L. monocytogenes replicates extracellularly in the gallbladder lumen to high bacterial densities and that this population can become the primary bacterial reservoir and source of fecally shed L. monocytogenes [ , , ]. Further, they revealed the presence of an uncharacterized severe within-host bottleneck in which the founding population of the gallbladder is limited to approximately 3 bacteria . Dowd et al. demonstrated that L. monocytogenes readily grows in ex vivo porcine gallbladders and the extracted bile . 
Given the limitations of murine infection models for examining bacterial gallbladder colonization, and inspired by Dowd’s use of ex vivo organs as incubators, we established a new infection model with a less restrictive bottleneck that is also amenable to Tn-seq analysis. While the present study focused on L. monocytogenes luminal growth within the organs, the ex vivo NHP gallbladder model could be utilized to measure mucosal and epithelial colonization of a variety of gallbladder-tropic pathogens. Our Tn-seq screen identified mutants in dozens of genes that were significantly depleted after incubation in the NHP gallbladders. These genes include those involved in redox regulation ( trxA, yjbH, rex ), cell wall modifications ( walK ), and protein stability ( clpX ), consistent with the known antimicrobial activities of bile which result in cell envelope stress and protein damage . Purine biosynthesis was previously identified as important for growth in porcine bile and here, we identified purB as essential for replication in the gallbladder . Interestingly, the screen did not identify the known bile resistance genes mdrT , bsh , bilE , or sigB as required for growth or survival in the gallbladder [ – ]. MdrT is a multidrug resistance transporter originally identified to secrete c-di-AMP and subsequently suggested to be an efflux pump for cholic acid , a component of bile. The bsh gene encoding bile salt hydrolase was originally described as required for survival in bile in vitro and for intestinal persistence in a guinea pig model of infection . The bile exclusion system encoded by bilE was proposed to be a transporter that protected L. monocytogenes from toxicity induced by 30% reconstituted bovine bile . SigB is a stress response alternative sigma factor that positively regulates both bsh and bilE .
It is now appreciated that bsh, bilE , and sigB confer resistance to acidified bile acids, as may be found in the small intestine, but are not necessary to detoxify bile at neutral pH, as is found in the gallbladder lumen . Furthermore, BilE was recently renamed EgtU when it was conclusively demonstrated that it specifically binds and transports the low molecular weight thiol ergothioneine . Some genes we identified by Tn-seq had been previously described as necessary for virulence in vivo, while many others had not been studied in the context of infection. The purB, trxA, yjbH, clpX , and rex genes are known to be required for full virulence, though some of the studies used different mouse strains and different inoculation methods [ – ]. Moreover, rex is one of the few L. monocytogenes genes specifically required for replication in the murine gallbladder . Conversely, we identified genes encoding all 8 structural components of the F-type ATP synthase, which was not previously examined in vivo . We also identified operons encoding two PTS EII complexes ( mpt, mpo ), as well as genes encoding their transcriptional ( sigL, manR ) and post-transcriptional regulators ( ptsH, ptsI ). Mpt and Mpo were previously designated as dispensable for virulence based on tissue culture assays, although they were never tested in vivo . Most mutants under investigation were deficient for growth in NHP bile in vitro , which was unsurprising given the conditions under which the Tn-seq screen was performed. The most attenuated strains after growth in bile were ∆ptsI , ∆trxA , and ∆purB , which also displayed growth defects in rich medium. Additionally, the ∆atpB strain was severely attenuated in BHI, which was expected based on the published requirement for the F-type ATP synthase for anaerobic replication . In the aerobic condition, the cultures were incubated statically and we hypothesize that the lack of aeration led to ∆atpB growing more slowly than WT in BHI.
Surprisingly, ∆atpB did replicate in NHP bile under anaerobic conditions. Future research will investigate the role of atpB in L. monocytogenes growth in bile and in vivo . It has been hypothesized that the F-type ATP synthase is required during anaerobic growth to combat acid stress and generate a proton motive force, rather than for ATP synthesis . It remains unclear, however, if these mechanisms contribute to the role of the F-type ATP synthase during infection. Tissue culture models of infection have historically been reliable indicators of L. monocytogenes pathogenesis in vivo , although the correlation was not as strong in this study. Several mutants displayed defects in a plaque assay, which measures both intracellular growth and intercellular spread over three days. Specifically, ∆ ccpA, ∆trxA, ∆purB , and clpX::Tn formed significantly smaller plaques than WT. Despite this, ∆ ccpA was surprisingly fully virulent in a murine model of listeriosis. Furthermore, mpt and mpo were completely dispensable for intracellular growth and intercellular spread in tissue culture, although they were required for infection of mice. Importantly, studies solely using tissue culture models of infection would not have identified these operons as important for pathogenesis in vivo. To assess the validity of the ex vivo NHP gallbladder model, we evaluated the roles of genes identified by Tn-seq in an oral murine model of listeriosis. Nearly all mutants tested were significantly attenuated in the gallbladders and livers of infected mice 4 dpi. The notable exception was the strain lacking ccpA, the catabolite control protein A that represses transcription of metabolic genes based on the phosphorylation state of HPr and the overall nutrient status of the cell. In the ∆ ccpA mutant, approximately 100 genes are de-repressed , resulting in attenuated growth in rich medium, bile, and BMDMs in vitro, yet the mutant exhibited no virulence defect in mice after oral infection.
Interestingly, not all strains were attenuated in the spleens and MLNs of infected mice, suggesting that factors required for colonizing the liver and gallbladder are distinct from those needed to colonize other peripheral organs. For example, ∆ trxA was attenuated approximately 29,000-fold in the gallbladder, but colonized the spleen and MLN at levels similar to WT, despite a significant growth defect in rich medium in vitro . Relatedly, it was recently reported that L. monocytogenes folate metabolism is specifically required in the livers but not the spleens of infected mice [ , , ]. Conversely, ∆ ptsI and ∆ purB were significantly attenuated in the MLN at both 1 and 4 dpi, indicating that these genes are necessary for dissemination beyond the gut and/or replication in the MLN. Taken together, oral infections of mice revealed that the genes identified by Tn-seq in the NHP gallbladder have broader roles in disease pathogenesis than simply conferring resistance to bile stress. Similarly to the gallbladders, most mutants were significantly attenuated in the feces at 4 dpi, supporting the notion that the gallbladder is the primary source of fecally excreted bacteria . The two mutants not attenuated in the feces were ∆ccpA and ∆ purB. While ∆ ccpA was fully virulent in mice, the ∆ purB mutant was severely attenuated in vitro and in all murine organs after infection. These results indicate that purine biosynthesis is required for intracellular infection and virulence, but not for extracellular survival in the lumen of the lower GI tract or feces. The factors influencing fecal shedding of L. monocytogenes are incompletely understood. Zhang et al. used a barcoded library of L. monocytogenes to demonstrate that the gallbladder is the source of fecally shed bacteria and that neutrophils and monocytes restrict bacterial dissemination to the gallbladder . 
A subsequent study using a similar approach determined that the bacterial population in the feces was derived from the inoculum after infection with a severely attenuated strain of L. monocytogenes . Thus, multiple factors contribute to fecal shedding during listeriosis, including the level of gallbladder colonization, specific strain virulence capacity, and the host immune response. Taken together, our Tn-seq approach revealed several novel insights into L. monocytogenes carbon metabolism during infection. It is well-accepted that the primary carbon sources consumed by L. monocytogenes in the cytosol are host-derived glycerol and hexose-phosphates and thus, the permeases that import these sugars are required for intracellular replication and virulence [ – ]. Conversely, L. monocytogenes encodes 84 genes that assemble into 29 complete PTSs, which were previously thought to be dispensable for virulence . Indeed, the main glucose and mannose PTS EII proteins, encoded by the mpt and mpo operons, are not required for intracellular growth or intercellular spread . However, we found that strains lacking mpt or mpo are significantly attenuated in a murine model of listeriosis. These results suggest that glucose and mannose are important nutrients for L. monocytogenes replicating in extracellular sites in vivo , such as the gallbladder and, to a lesser extent, the liver. Recent studies established that significant populations of L. monocytogenes are extracellular in the liver, spleen, and MLN , although the bacterial requirements for surviving extracellularly and the role that extracellular bacteria play in pathogenesis remain unclear. Interestingly, infection with mutants lacking mpt and mpo resulted in similar bacterial loads in the MLN as WT at both 1 and 4 dpi, which suggests that these PTS operons are not required for dissemination beyond the GI tract. Moreover, all PTSs are activated by a phosphorelay between Enzyme I (encoded by ptsI ) and HPr (encoded by ptsH ). 
Thus, the ∆ ptsI mutant, which functionally lacks all 29 PTSs, is deficient for intracellular replication and dramatically attenuated in vivo. This suggests that PTS-dependent carbohydrates are important nutrients in the host cytosol. Ongoing studies are aimed at characterizing the additional PTSs that are required for full virulence.
Ethics statement
This study was carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. All protocols were reviewed and approved by the Animal Care and Use Committee at the University of Washington (Protocol 4410–01).
Bacterial strains and conditions
The bacterial strains used in this study are listed in . L. monocytogenes was cultured in brain heart infusion (BHI) and E. coli was cultured in Luria-Bertani (LB) broth at 37°C, with shaking (220 rpm), unless otherwise specified. Antibiotics (purchased from Sigma Aldrich) were used at the following concentrations: streptomycin, 200 μg/mL; chloramphenicol, 10 μg/mL ( E. coli ) and 7.5 μg/mL ( L. monocytogenes ); and carbenicillin, 100 μg/mL. L. monocytogenes mutants were derived from wild type strain 10403S . Plasmids were introduced to E. coli via chemical competence and heat shock and introduced into L. monocytogenes via trans-conjugation from E. coli SM10 .
Vector construction and cloning
To construct in-frame, unmarked deletion mutants by allelic exchange in L. monocytogenes , ~700 bp regions up- and downstream of the gene of interest were PCR amplified using L. monocytogenes 10403S genomic DNA as a template. PCR products were digested and ligated into pLIM (gift from Arne Rietsche, Case Western). pLIM plasmids were then transformed into E. coli and sequences confirmed via Sanger sequencing (Azenta). Plasmids harboring mutant alleles were then introduced into L. monocytogenes via trans-conjugation and integrated into the chromosome as previously described . Complemented strains of L. monocytogenes were generated using the pPL2 integration plasmid . Genes were PCR amplified with their respective native promoters using L. monocytogenes 10403S genomic DNA as a template, and sequences were confirmed by Sanger sequencing. The constructed pPL2 plasmids were then introduced into L. monocytogenes by trans-conjugation and integration into the L. monocytogenes chromosome was confirmed by antibiotic resistance.
Growth curves
NHP bile aliquots were plated to evaluate sterility and stored at -80°C. Before an experiment, aliquots were thawed overnight at 4°C and warmed to room temperature immediately before the inoculation into a 96-well plate. Overnight L. monocytogenes cultures were washed twice, resuspended in PBS, and 10^5 CFU were inoculated into either NHP bile or BHI, in a total volume of 100 µL per well. Bacterial growth was measured by collecting samples of the cultures, serially diluting in PBS, and plating for CFU. For experiments performed anaerobically, NHP bile aliquots were thawed overnight in GasPak EZ Anaerobe gas-generating pouches (Becton Dickinson), and BHI and the 96-well plate were degassed overnight in a closed-system anaerobic chamber (Don Whitley Scientific A35 anaerobic work station). After washing and resuspending aerobically-grown overnight L. monocytogenes cultures in PBS, the L. monocytogenes suspensions and bile aliquots were transferred into the anaerobic chamber and the plate was inoculated and incubated within the chamber. Bacterial growth was measured by collecting samples of the cultures, serially diluting in PBS, and plating for CFU. To evaluate aerobic growth in rich medium, L. monocytogenes overnight cultures were normalized to an OD600 of 0.02 in 25 mL BHI in 250-mL flasks and incubated at 37°C, with shaking. At each time point, bacteria were serially diluted and plated on BHI agar to enumerate CFU. Raw data are included in .
L. monocytogenes transposon library in NHP gallbladders
The NHP gallbladders were obtained from animals at the WaNPRC that had been part of non-infectious experiments. The organs were transported between facilities on ice and used for experiments within 2 hours of excision. The animals included a mix of females and males, ages 8-12 years old, with body weights ranging from 10 – 13.7 kg.
The L. monocytogenes transposon library was inoculated directly from the -80°C stock into BHI broth and incubated at 37°C for 2 hours, with shaking. The library was then washed twice and resuspended in PBS to a density of 10^8 CFU per 2 kg of NHP body weight. The inoculum size was determined to maintain 1,000-fold coverage of the library. Gallbladders were injected via syringe with 100 µL of inoculum, the injection site was sealed with liquid bandage (3M), and incubated in a dry 15 cm petri dish at 37°C, with 5% CO2. After 30 minutes, 200 µL of bile was removed from the gallbladder via syringe, serially diluted, and plated to enumerate CFU. 6 hours post-injection, bile was extracted from the organ via syringe for CFU enumeration and the remaining luminal contents collected via cell scraper after resection. The gallbladder contents were diluted into 50 mL BHI broth and incubated at 37°C for 2 hours, with shaking. The cultures were pelleted, washed twice with PBS, and stored at -80°C.
Tn-seq library preparation, sequencing, and analysis
Genomic DNA was extracted using a Quick-DNA Fungal/Bacterial MiniPrep Kit (Zymo Research). DNA was diluted to 3 µg/130 µL in microTUBES (Covaris) and sheared in duplicate on a Covaris LE220 Focused-Ultrasonicator using the following settings: duty cycle 10%; peak intensity 450; cycles per burst 100; duration 100 sec. Sheared DNA was then end-repaired with NEBNext End Repair (NEB), and purified with Ampure SPRIselect beads (Beckman Coulter). Poly-C tails were added to 1 µg of end-repaired DNA with Terminal Transferase (Promega), then purified with Ampure SPRIselect beads. Transposon junctions were PCR amplified with primers olj376 and pJZ_RND1 ( ) using 500 ng DNA and KAPA HiFi Hotstart Mix (Kapa Biosystems). PCR reactions were stopped once the inflection point of amplification was reached (6-14 cycles), and amplified transposon junctions were purified with Ampure SPRIselect beads.
Barcoded adaptors were added using KAPA HiFi Hotstart Mix, and primers pJZ_RND2 and one TdT_Index per sample. DNA was purified and size-selected with Ampure SPRIselect beads for 250-450 bp fragments. Samples were pooled and sequenced as single end 50 bp reads on a NextSeq MO150 sequencer with a 7% PhiX spike in and primer pJZTnSq_SeqPrimer. Trimmed reads were mapped to the L. monocytogenes 10403S NC_17544 reference genome in PATRIC (now https://www.bv-brc.org/ ) and assessed for essentiality using TRANSIT software [ – ]. Genes were considered required for survival or growth in ex vivo NHP gallbladders if they met the following criteria: 5 or more insertion sites in the input libraries, a p value less than 0.05, and a 1.5-fold or greater depletion after incubation in the gallbladder.
Murine cells
L2 fibroblasts were incubated at 37°C in 5% CO2 in Dulbecco’s modified Eagle’s medium (DMEM) with 10% heat-inactivated fetal bovine serum (FBS) (Cytiva) and supplemented with sodium pyruvate (1 mM) and L-glutamine (1 mM) (L2 Medium). For passaging, cells were maintained in Pen-Strep (100 U/ml) but were plated in antibiotic-free media for infections. Bone marrow-derived macrophages (BMDMs) were routinely incubated in DMEM supplemented with 20% heat-inactivated FBS, 1 mM sodium pyruvate, 1 mM L-glutamine, 10% supernatant from M-CSF-producing 3T3 cells, and 55 μM β-mercaptoethanol (BMDM medium). BMDMs were isolated as previously described . Briefly, femurs and tibias from C57BL/6 mice bred in-house were crushed with a mortar and pestle in 20 mL BMDM medium and strained through 70-μm cell strainers. Cells were plated in 150-mm untreated culture dishes, supplemented with fresh BMDM medium at day 3, and then harvested by resuspending cells in cold PBS at day 7. BMDMs were aliquoted in 80% BMDM medium, 10% FBS, and 10% DMSO and stored in liquid nitrogen.
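The Tn-seq gene-calling step described above (5 or more insertion sites in the input libraries, p < 0.05, and at least 1.5-fold depletion) amounts to a three-condition filter over the per-gene TRANSIT output. A minimal sketch of that filter — the record field names and example values here are illustrative assumptions, not the actual TRANSIT column layout or data from this study:

```python
# Hypothetical sketch: apply the three hit-calling criteria to per-gene
# Tn-seq results. Field names are assumptions for illustration only.
def significant_hits(rows):
    """rows: list of dicts with keys 'gene', 'insertion_sites',
    'p_value', and 'log2_fold_change' (output vs. input library).
    Returns genes depleted >= 1.5-fold with >= 5 sites and p < 0.05."""
    hits = []
    for r in rows:
        # 1.5-fold depletion corresponds to log2 fold change <= -log2(1.5).
        depleted = r["log2_fold_change"] <= -0.585
        if r["insertion_sites"] >= 5 and r["p_value"] < 0.05 and depleted:
            hits.append(r["gene"])
    return hits

# Illustrative records: only the first passes all three criteria.
example = [
    {"gene": "purB", "insertion_sites": 12, "p_value": 0.001, "log2_fold_change": -4.2},
    {"gene": "ccpA", "insertion_sites": 8, "p_value": 0.20, "log2_fold_change": -0.3},
    {"gene": "trxA", "insertion_sites": 3, "p_value": 0.01, "log2_fold_change": -2.0},
]
print(significant_hits(example))  # → ['purB']
```

Requiring multiple independent insertion sites per gene guards against calling a hit from a single noisy insertion, which is why the site-count threshold is applied alongside the statistical test.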
Intracellular growth curves
BMDMs were plated in TC-treated 24-well plates at a density of 6 x 10^5 cells per well in BMDM medium. L. monocytogenes cultures were grown overnight at 30°C, stationary. The next day, L. monocytogenes cultures were washed twice, resuspended in PBS, and added to BMDMs at an MOI = 0.1. After 30 minutes, cells were washed twice with PBS and BMDM medium containing gentamicin (50 µg/mL) was added to kill extracellular bacteria. At various time points post-infection, cells were washed twice with PBS and lysed in 250 μL cold 0.1% Triton-X in PBS. Lysates were then serially diluted and plated to enumerate intracellular CFU.
Plaque assays
Plaque assays were performed as previously described . In brief, 1.2 x 10^6 L2 fibroblasts were plated in tissue-culture treated 6-well plates overnight in L2 medium. L. monocytogenes cultures were grown overnight at 30°C stationary. The next day, L. monocytogenes cultures were diluted 1:10 in PBS and 5 µL of diluted bacteria was added to cell monolayers. After 1 hour of infection, monolayers were washed twice with PBS, then overlaid with 3 mL of molten agarose solution (1:1 mixture of 2X DMEM and 1.4% SuperPure Agarose (U.S. Biotech Sources, LLC), containing 10 µg/mL gentamicin). After 3 days of incubation, 2 mL of molten agarose solution containing Neutral Red was added to wells to visualize plaques. After 12-24 hours, plates were scanned, plaque areas quantified using ImageJ software and normalized to WT.
Oral murine infections
Female BALB/c mice were purchased from The Jackson Laboratory (Strain 000651) at 5 weeks of age and used in experiments when they were 6–7 weeks old. BALB/c mice were used because they are more susceptible to oral listeriosis and gallbladder colonization, in particular . Infections were performed as previously described . Streptomycin (5 mg/mL) was added to drinking water 48 hours prior to infection and food and water were removed 16 hours before infection. L. monocytogenes cultures were grown overnight at 30°C, stationary. Overnight cultures were diluted 1:10 in 5 mL fresh BHI and incubated at 37°C for 2 hours, with shaking. Bacteria were then washed twice and diluted in PBS. Mice were fed 10^8 bacteria in 20 µL of PBS and food and water were returned immediately after infection. Inocula were serially diluted and plated. Body weights were recorded daily and mice were humanely euthanized 1 and 4 days post-infection for tissue collection. Tissues were homogenized in the following volumes of 0.1% Igepal CA-630 (Sigma): MLN, 3 mL; cecum (contents removed and tissues rinsed with PBS), 4 mL; liver, 5 mL; spleen, 3 mL. Feces were homogenized in 1 mL of 0.1% Igepal with a sterile stick, and gallbladders were ruptured and crushed in 500 µL of 0.1% Igepal with a sterile stick. All samples were serially diluted in PBS and plated to enumerate CFU.
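The CFU enumeration used throughout these methods back-calculates the total burden in an organ from a plate count, the serial dilution factor, the plated volume, and the homogenate volume listed above. A minimal sketch of that arithmetic — the function name and the example numbers are illustrative, not values from this study:

```python
def cfu_per_organ(colonies, dilution_factor, plated_volume_ml, homogenate_volume_ml):
    """Back-calculate total CFU in an organ homogenate from one countable plate.

    colonies             -- colonies counted on the plate
    dilution_factor      -- e.g. 1e3 for a 10^-3 serial dilution
    plated_volume_ml     -- volume spread on the plate, in mL
    homogenate_volume_ml -- total homogenate volume (e.g. 5 mL for a liver)
    """
    cfu_per_ml = colonies * dilution_factor / plated_volume_ml
    return cfu_per_ml * homogenate_volume_ml

# Illustrative numbers: 42 colonies from 0.1 mL of a 10^-3 dilution
# of a 5 mL liver homogenate -> about 2.1 x 10^6 CFU in the organ.
print(f"{cfu_per_organ(42, 1e3, 0.1, 5.0):.2e}")  # → 2.10e+06
```

Because attenuation is reported as a fold change between wild-type and mutant burdens, any systematic error in the homogenate volume cancels out when both strains are processed identically.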
S1 Fig. Growth curves of L. monocytogenes in BHI or bile in vitro. BHI or NHP bile was inoculated with L. monocytogenes, incubated statically in an aerobic incubator (A,B) or in an anaerobic chamber (C,D), and CFU were enumerated, as in . Data are the means and SEMs of at least three independent experiments. Statistics omitted for clarity. (TIF)
S1 Table. Tn-seq results. (XLSX)
S2 Table. Summary of significant Tn-seq hits. (DOCX)
S3 Table. Summary of mutant phenotypes. (DOCX)
S4 Table. Strains used in this study. (DOCX)
S5 Table. Primers used in Tn-seq library preparation. (DOCX)
S6 Table. All data. (XLSX)
Imaging Perfusion Changes in Oncological Clinical Applications by Hyperspectral Imaging: A Literature Review

Cancer is the leading health problem in the world. In the EU-27 alone, 2.7 million people are diagnosed with cancer each year, while 1.3 million die from the disease. To deal with cancer, knowledge of cancer physiology is essential, and tissue perfusion is one of the most important physiological parameters. Perfusion of tumors is critical in their development and growth. Early studies have shown that tumor growth is dependent on the development of vasculature that has the capacity to supply oxygen and nutrients to dividing tumor cells. However, the vasculature is important not only for the supply of oxygen to tumors but also for the delivery of drugs into tumors. Finally, vasculature is also important for the response of tumors to surgery and other ablative techniques, such as radiotherapy and thermal and nonthermal ablative techniques. It was demonstrated that information about tumor and healthy tissue perfusion can improve therapy outcome, either by guiding tumor resection or by monitoring the reperfusion of the resected tissues (e.g., anastomosis or tissue flaps). Conventional techniques for perfusion imaging in oncology are CT and MR imaging. CT perfusion imaging provides information on tissue hemodynamics by analyzing the first passage of an intravenous contrast bolus through the vessels. On the other hand, MR perfusion imaging utilizes either endogenous or exogenous tracers. In the latter case, it is based on following an injected bolus of contrast agent over time, which is then used to determine the perfusion characteristics of tissues.
While both imaging techniques are promising, radiation exposure (CT), potential adverse events due to contrast (CT/MRI), limited access (MRI), high cost (MRI), and the inability to scan at the bedside or in the operating theater are disadvantages of the conventional techniques. To address these shortcomings, various imaging techniques, including optical imaging, have been explored for tissue perfusion imaging. In optical imaging, the optical contrast of tissues is intrinsically sensitive to tissue abnormalities, such as changes in oxygenation, blood concentration or scattering. These changes are characteristic of many tumors, since they include angiogenesis, hypervascularization, hypermetabolism, and hypoxia, making optical imaging techniques promising candidates for perfusion imaging in oncology. Hyperspectral imaging (HSI) is an emerging optical imaging technique that uses light to obtain information about perfusion, or more specifically about the oxygenation, water content or hemoglobin content of the tissue. The distinct advantage of HSI is that it is a noncontact, nonionizing, and noninvasive modality and does not require a contrast agent. HSI integrates conventional imaging and spectroscopy techniques by creating a set of images called a hypercube, which contains the spectral signature of the underlying tissue and in turn points to clinically relevant changes, such as angiogenesis or hypermetabolism. illustrates the structure and composition of hyperspectral images and the physiological parameters derived from these images. HSI was originally employed in remote sensing applications and then expanded into other fields, such as vegetation type and water source detection, wood product control, drug analysis, food quality control, artwork authenticity and restoration, and security.
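The hypercube described above can be pictured as a spatial grid of pixels, each carrying a reflectance spectrum, from which per-pixel indices are derived. The toy sketch below shows that data layout and a simple two-wavelength ratio map; the wavelengths, values, and ratio are purely illustrative and do not represent the reconstruction algorithm of any actual HSI device.

```python
# Toy "hypercube": a grid of pixels, each holding a reflectance spectrum
# sampled at a few wavelengths (nm). Real hypercubes carry hundreds of
# spectral bands; the values here are illustrative only.
wavelengths = [660, 760, 800, 900]

def make_cube(rows, cols):
    # Constant toy spectrum per pixel, just to show the data layout.
    return [[[0.40, 0.30, 0.50, 0.45] for _ in range(cols)] for _ in range(rows)]

def ratio_map(cube, wl_a, wl_b):
    """Per-pixel two-wavelength ratio (an illustrative index; clinical
    devices fit the full spectrum to derive parameters such as StO2)."""
    ia, ib = wavelengths.index(wl_a), wavelengths.index(wl_b)
    return [[px[ia] / px[ib] for px in row] for row in cube]

cube = make_cube(2, 3)            # 2 x 3 pixels, 4 bands each
rmap = ratio_map(cube, 760, 800)  # one scalar per pixel
```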
HSI is also an attractive modality in the medical field and has been successfully applied for the detection of various types of tumors, particularly in conjunction with histopathologic diagnosis. HSI has, inter alia, already proven its value in plastic and vascular surgery, where assessing perfusion predicted the outcome of healing processes in transplants and wounds. How valuable HSI could be in quantifying perfusion changes during interventions in clinical oncology remains unclear, and to that end, we decided to systematically review the literature, focusing exclusively on studies in which HSI was performed on patients in the clinical oncology setting.
Two authors (R.H. and M.M.) jointly conducted – to preclude potential bias – a comprehensive literature search on October 3, 2022 through the PubMed and Web of Science electronic databases using the following search terms: »hyperspectral imaging perfusion cancer« and »hyperspectral imaging resection cancer«. No restrictions on publication date or language were imposed. The inclusion criterion was the application of the hyperspectral imaging modality in the oncological clinical setting, meaning that all animal and phantom, ex vivo, experimental, research and development, and purely methodological studies were excluded. Special care was taken that duplications were removed, both across databases and across studies; for example, if a study was first published in proceedings and later in a journal, the proceedings article was considered a nonprimary publication and was therefore excluded. Studies were categorized with respect to the anatomical location of the tumors.
A flow diagram of the selection strategy is shown in ; in total, 101 and 84 articles were found to be of interest in the PubMed and Web of Science databases, respectively. After excluding duplicates and applying the exclusion criteria, first considering the title and abstract and then, if necessary, reading the entire article, 20 articles were identified for further analysis. The anatomical locations of tumors in the selected articles were as follows: kidneys (1 article), breasts (2 articles), eye (1 article), brain (4 articles), entire gastrointestinal (GI) tract (1 article), upper GI tract (5 articles) and lower GI tract (6 articles).

Kidneys

The pioneering effort in assessing perfusion by means of HSI in clinical oncology was the work of Best et al. They applied the modality to monitor renal oxygenation during partial nephrectomy using a parameter called the percentage of oxyhemoglobin (HbO2) and categorized 26 patients into preoperative groups of high (>75% HbO2) and low (<75% HbO2) oxygenation. The HbO2 parameter proved useful before, during and after the application of a clamp, with an example image presented in . The study demonstrated that patients with low oxygenation had a statistically significant postoperative decline in estimated glomerular filtration rate. While further research is needed, HSI shows potential for assessing susceptibility to renal ischemic injury in patients undergoing partial nephrectomy.

Eye

In the study of Rose et al., clinicians used Doppler spectral domain optical coherence tomography (SD-OCT) in 8 patients diagnosed with radiation retinopathy to measure total retinal blood flow, while retinal blood oxygen saturation was quantified by a specially designed HSI retinal camera. They found that blood flow in the retinopathy eye was significantly lower than that in the fellow eye, while arteriolar oxygen saturation and venular oxygen saturation were higher in the retinopathy eye than in the fellow eye.
Unfortunately, the researchers conducted no follow-up studies to further evaluate microvascular changes due to radiation-induced retinopathy.

Breasts

Chin et al. studied a dose-response relationship between radiation exposure and oxygenated hemoglobin in 43 women undergoing breast-conserving therapy radiation. The authors concluded that HSI may prove useful as an objective measure of patients’ skin response to radiation dose. However, they also noted that interpatient variability remains a challenge, as approximately 40% of the variability in the change in oxygenated hemoglobin is accounted for by dose, 25% by the individual woman, and 35% by causes that they could not identify. Pruimboom et al. used HSI in a prospective clinical pilot study enrolling women undergoing breast reconstruction and detected mastectomy skin flap necrosis in 3 out of 10 patients. Somewhat analogously to the study of Best et al., they found that tissue oxygenation was statistically significantly lower in the group of patients who developed flap necrosis than in the group who did not. It appears that HSI is specifically suited for the early detection of flap necrosis, which could in turn aid in the timely and accurate debridement of necrotic tissue. Future work should confirm the modality’s potential in identifying partial deep inferior epigastric artery perforator (DIEP) flap necrosis as well.

Brain

Fabelo et al. developed an intraoperative HSI acquisition system and assembled an in vivo hyperspectral human brain image database with the overall goal of accurately delineating tumor tissue from normal brain tissue.
As a brain tumor typically infiltrates the surrounding tissue, it is extremely difficult to identify the border; in addition, both overresection of adjacent normal brain tissue and leaving tumor tissue behind have detrimental impacts on the results of the surgery and patient outcomes, either adversely affecting the patient’s quality of life or causing tumor progression. The work of Fabelo et al. was performed as part of the European Future and Emerging Technologies (FET) project HELICoiD (HypErspectraL Imaging Cancer Detection). In their first methodological paper, they designed a special cancer detection algorithm utilizing spatial and spectral features of hyperspectral images from 5 patients with grade IV glioblastoma. They demonstrated that it was possible to accurately discriminate between normal tissue, tumor tissue, blood vessels and background by generating classification and segmentation maps in surgical time during neurosurgical operations, as shown in . In their second methodological paper, they used data from 6 patients with grade IV glioblastoma and applied improved algorithms to create maps in which the parenchymal area of the brain could be delineated; an overall average accuracy of 80% was achieved. Their HSI system was systematically assessed at two clinical institutions enrolling 22 patients, and the researchers found that results relevant for surgeons were obtained within 15 to 70 seconds. They also made this first in vivo hyperspectral human brain image database specifically designed for cancer detection available to the public. While the authors were hopeful in their conclusion that HSI could facilitate brain tumor surgeries, no further studies beyond 2019 were published. HSI files from the studies by Fabelo and co-workers are available from the http://hsibraindatabase.iuma.ulpgc.es database.
Entire gastrointestinal tract

During the past 3 years, the main focus of applying HSI in clinical oncology has been in the domain of the gastrointestinal tract, or more specifically, addressing anastomotic insufficiency, which is one of the most serious postsurgery complications of reconstructing the gastrointestinal conduit. As anastomotic healing fundamentally depends on adequate perfusion, HSI could be a suitable modality for assessing anastomotic perfusion in clinical practice. In a pilot study, Jansen-Winkeln et al. collected hyperspectral images in 47 patients who underwent gastrointestinal oncologic resection followed by esophageal, gastric, pancreatic, small bowel or colorectal anastomoses. The recorded hyperspectral images were analyzed to extract the following specific physiological tissue parameters, which were deemed characteristic of perfusion changes at the sites of anastomoses: oxygen saturation of the tissue (StO2), organ hemoglobin index (OHI), near-infrared perfusion index (NIR-PI), and tissue water index (TWI); the most clinically relevant appeared to be StO2. They concluded that intraoperative HSI provided a noncontact, noninvasive modality that enabled real-time analysis of potential anastomotic leakage without the use of a contrast medium. Their group followed their initial work with several studies focusing on the upper and lower gastrointestinal tract, respectively, described in more detail below.

Upper gastrointestinal tract

Köhler et al. applied intraoperative HSI in 22 patients during esophagectomy to the tip of the gastric tube, which later became the esophagogastric anastomosis; they compared physiological HSI parameters (StO2, OHI, NIR-PI and TWI) in 14 patients who underwent laparoscopic gastrolysis and ischemic conditioning of the stomach with those in 8 patients without pretreatment.
They noted that the values of the physiological HSI parameters were higher in patients with ischemic preconditioning than in patients without it; however, only StO2 exhibited weak statistical significance. In a single patient who developed anastomotic insufficiency of the intrathoracic esophagogastric anastomosis, all physiological HSI parameters were substantially lower than those in the other patients. compares the NIR-PI image recorded in this patient with the corresponding image taken in a patient without postoperative anastomotic leakage. The hybrid esophagectomy along with intraoperative HSI used in the paper of Köhler et al. was presented as a video article by Moulla et al., while another clinical group corroborated the findings of Köhler et al. by reporting a case study including four patients. Hennig et al. continued the systematic evaluation of the capabilities of intraoperative HSI in 13 consecutive patients who underwent hybrid esophagectomy and reconstruction of the gastric conduit. The researchers also decided to use both intraoperative HSI and fluorescence imaging with indocyanine green (FI-ICG) to define the optimal position of the anastomosis. While no threshold values have yet been established to define adequately and insufficiently perfused tissues, they took an StO2 value of >75% to define the well-perfused area. It was noteworthy that, in 10 out of 13 patients, the simultaneously recorded imaging modalities identified the perfusion border zone more peripherally than the border designated subjectively by the surgeon. While HSI and FI-ICG may complement each other as intraoperative modalities, Hennig et al. were of the opinion that HSI may be advantageous due to “the lower costs, noninvasiveness, and lack of contraindications”. Moulla et al. expanded oncological clinical applications into the domain of pancreatic surgery.
Hyperspectral images were recorded during pancreatoduodenectomy in 20 consecutive patients before and after gastroduodenal artery clamping. In this pilot study, by means of the physiological HSI parameter StO2, they were able to detect an improvement in liver perfusion after median arcuate ligament division in one patient with celiac artery stenosis. The HSI acquisition system in the operating room is shown in .

Lower gastrointestinal tract

Jansen-Winkeln et al. applied intraoperative HSI in 24 patients to define the transection line during colorectal surgery. They found that the transection line subjectively delineated by the surgeon deviated from the border line determined by HSI; in 13 patients, the subjectively planned resection was up to 13 mm too distal, in a poorly perfused area, while in 11 patients it was too far into the well-perfused area. Similar to esophagectomy, intraoperative HSI has shown potential in determining the optimal anastomotic area during colorectal surgery. Jansen-Winkeln et al. further applied intraoperative HSI along with FI-ICG in 32 consecutive patients undergoing colorectal resection and concluded that both modalities provided similar information in specifying the perfusion border zone and could complement each other. To optimize the performance of both modalities, Pfahl et al. constructed a combined FI-ICG and HSI system, which was tested in 128 patients. In another study, Jansen-Winkeln et al. imaged colorectal tumors in 54 consecutive patients during colorectal resections and found that HSI used in combination with a neural-network algorithm was able to classify cancerous or adenomatous margins around the central tumor with a sensitivity of 86% and a specificity of 95%.
Recently, they published a large study enrolling 115 patients who underwent colorectal resection to systematically assess the feasibility of HSI in quantifying tissue perfusion, and in accordance with the smaller patient series, they found that “well-perfused areas were clearly distinguishable from the less perfused ones only after one minute”. Similar conclusions were reached in a group of 52 patients undergoing colorectal surgery by Barberio et al., who also found that the physiological HSI parameter StO2 was significantly lower in patients receiving neoadjuvant radio/chemotherapy than in other oncological patients. illustrates the usefulness of HSI in establishing the transection line during colorectal surgery.
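For reference, the sensitivity and specificity figures quoted above follow the usual confusion-matrix definitions, as in this small sketch (the counts are invented for illustration and are not the study's data):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    return sensitivity, specificity

# Illustrative counts only: 86 of 100 tumor samples detected,
# 95 of 100 non-tumor samples correctly rejected.
se, sp = sens_spec(tp=86, fn=14, tn=95, fp=5)
```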
Based on this literature review, the following inferences can be made: HSI is still finding its place in oncological clinical applications, with the assessment of (i) mastectomy skin flap perfusion after breast reconstructive surgery and (ii) anastomotic perfusion during reconstruction of the gastrointestinal conduit as the most promising. However, caution needs to be advised, because much research has recently been done in the arena of using HSI during brain surgery for glioblastoma, yet this clinical effort has not been sustained. In addition to the obvious need to expand the study of Pruimboom et al. to a larger patient group, which would also include cases of DIEP flap necrosis, a meaningful and robust establishment of cutoff values for the physiological HSI parameters is mandatory if HSI is to retain its clinical appeal. In their study, the oxygen saturation of tissue StO2 appeared to be the most useful HSI index, and a cutoff value of 36.3% predicting tissue necrosis was found; this value was close to that defined by a pilot study enrolling mostly nononcological patients (19 out of 22), in which values of both StO2 and NIR-PI above 40% indicated regular healing without any revision surgery; furthermore, the operators in that study noted that HSI was superior to assessments based on clinical and Doppler ultrasound monitoring in both accuracy and speed. It is worthwhile to emphasize that HSI parameters are in general easy for the operator to follow, as they are visualized as false-colour images.
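Cutoff values such as the 36.3% or 40% StO2 thresholds discussed above are typically derived by ROC-style threshold scanning. The sketch below picks, from made-up StO2 values, the threshold maximizing Youden's J (sensitivity + specificity - 1), assuming low StO2 indicates the adverse event; it illustrates the general procedure only, not the method used in the cited studies.

```python
def best_cutoff(values_event, values_no_event, candidates):
    """Pick the threshold maximizing Youden's J, assuming low values
    (e.g. low StO2) indicate the adverse event (e.g. flap necrosis)."""
    best = None
    for c in candidates:
        sens = sum(v < c for v in values_event) / len(values_event)
        spec = sum(v >= c for v in values_no_event) / len(values_no_event)
        j = sens + spec - 1
        if best is None or j > best[1]:
            best = (c, j)
    return best

# Made-up StO2 values (%) for patients with and without flap necrosis.
necrosis = [25, 30, 34, 36]
healed = [45, 50, 55, 60, 65]
cutoff, j = best_cutoff(necrosis, healed, candidates=range(20, 70))
# Here the classes separate perfectly, so J reaches 1.0.
```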
When evaluating applications of HSI in assessing anastomotic perfusion during reconstruction of gastrointestinal conduits, two main challenges become apparent: (i) the first challenge is, as in the case of breast reconstructive surgery, related to the establishment of a clear cutoff value indicating adequate tissue perfusion, so that the operator can convincingly identify the optimal anastomosis area; (ii) the second challenge is related to HSI being limited to open surgery due to the large size of the HSI camera. The first challenge will need to be approached by enrolling progressively larger patient groups undergoing various oncological surgical interventions. It appears that the group of Jansen-Winkeln et al. is already moving in this direction by conducting progressively larger clinical studies. However, with the application of neural networks, the requirements for cohort sizes become far higher, although they could be partially satisfied with data augmentation. The second challenge has recently been addressed by the same group with ex vivo testing of a laparoscopic HSI camera and the announcement that a clinical trial with minimally invasive HSI has already commenced. Comparison of HSI and FI-ICG revealed similar results in defining the perfusion border of the anastomosis, while both modalities were documented to be reliable, fast, and intuitive. Moreover, while HSI is completely noninvasive, the injection of ICG can, rarely, provoke allergic reactions. Since there is potential for each of the two modalities to contribute complementary information, it is not surprising that Pfahl et al. constructed a combined HSI and FI-ICG recording system. In conclusion, HSI is at this stage emerging as an attractive imaging modality to quantify perfusion in oncological patients. Hopefully, a larger number of clinical sites will initiate clinical trials to address the challenges that still preclude the final acceptance of this promising imaging technique in the oncological clinical setting.
Predicting the immunological nonresponse to antiretroviral therapy in people living with HIV: a machine learning-based multicenter large-scale study
Introduction Highly active antiretroviral therapy (HAART) is regarded as the most efficacious approach to treating HIV infection, effectively suppressing viral replication and facilitating immune reconstitution ( ). However, there is increasing evidence that poor immune reconstitution remains a common issue in clinical practice, with prevalence rates potentially exceeding 10-40% ( ; ; ). Despite complete viral suppression by HAART, people living with HIV (PLWH) who experience immune non-response (INR) face increased risks of both AIDS-defining and non-AIDS-defining illnesses ( ; ; ). Consequently, clinical guidelines recommend using clinical immunological monitoring as an alternative biomarker of treatment response to identify non-responders to HAART early ( ; ). Accordingly, the recovery of CD4+ T cell counts after HAART has gradually become one of the predictors of clinical prognosis in PLWH ( ; ; ). Numerous cohort studies have evaluated factors associated with CD4+ T cell recovery post-HAART, identifying that older age, lower baseline CD4+ T cell counts, higher baseline HIV RNA levels, reduced thymic function, increased T cell activation during treatment, and detectable viremia are all linked to poorer CD4+ T cell recovery ( ; ; ; ). In recent years, a variety of mathematical models have been developed for the prevention and treatment of HIV/AIDS ( ; ; ; ), providing theoretical guidance and recommendations for HIV treatment. However, current models predominantly rely on traditional linear approaches such as logistic regression ( ).
This gap suggests a need for more sophisticated modeling techniques that can integrate a broader range of biological markers and their dynamic changes over time to enhance the prediction and management of HIV treatment outcomes. In this study, we aimed to identify risk factors for INR among PLWH in South China who had been treated with standard HAART for at least 2 years. The objective was to develop machine learning predictive models that use multiple clinical indicators from baseline, 6 months, and 12 months to predict whether a patient will experience INR after two years of HAART. Such a model can assist clinicians in predicting immune responses in a timely manner and implementing interventions to enhance immune function. Additionally, the calibration and diagnostic capabilities of the machine learning models were evaluated in both internal and external validation sets.
Methods 2.1 Study design and participant inclusion and exclusion criteria This study is based on the follow-up cohorts of PLWH at Nanfang Hospital and the Fifth Hospital of Zunyi, where participants have been undergoing long-term treatment and regular follow-ups at HIV clinics. A total of 1577 participants were enrolled based on defined inclusion and exclusion criteria. The inclusion criteria were: 1) a baseline CD4+ T cell count of less than 350 cells/μL at the initiation of HAART, with continuous follow-up for 2 years and two HIV RNA measurements of less than 50 copies/mL; 2) age 18 years or older, with complete baseline, 6-month, 12-month, and 24-month CD4+ T cell counts. The exclusion criteria were: 1) poor treatment adherence or a history of treatment interruption; 2) concurrent malignancy or long-term use of immunosuppressive medications; and 3) incomplete clinical data. As illustrated in , the cohort from Nanfang Hospital was divided into a training set and an internal validation set in a 7:3 ratio, while the cohort from the Fifth Hospital of Zunyi was designated as the external validation set. 2.2 Ethics approval and consent to participate The research received approval from the Institutional Ethics Committee of Nanfang Hospital (study identifier: NFEC-2021-448) and adhered to the Helsinki Declaration of 1964 and its subsequent updates. Informed consent was obtained from all participants. 2.3 Data collection and definition We systematically collected demographic and clinical parameters of participants, including age, gender, HAART regimens, HBsAg positivity, anti-HCV positivity, HIV viral load, and laboratory measurements at baseline, 6 months, 12 months, and 24 months into treatment.
These measurements encompassed CD4+ T cell counts, CD8+ T cell counts, CD4/CD8 ratios, platelet count (PLT), creatinine (CR), hemoglobin (HGB), white blood cell count (WBC), aspartate aminotransferase (AST), alanine aminotransferase (ALT), triglycerides (TG), total cholesterol (CHOL), and fasting plasma glucose (FPG). These data were obtained from clinical records or databases. Currently, there is no universally accepted definition of immune reconstitution failure. In this study, INR was defined as a CD4+ T lymphocyte count still below 350 cells/µL after two years of HAART despite two consecutive HIV RNA measurements <50 copies/mL ( ; ). 2.4 Construction, evaluation, and interpretation of predictive models In this study, variables from the training set that demonstrated significance at a p-value <0.05 in univariate analysis were included in model construction. We employed several machine learning algorithms to predict INR classification, including the logistic regression model (LRM), random forest (RF), XGBoost, support vector machine (SVM), naive Bayes, decision trees, neural network, and k-nearest neighbors (KNN). To prevent overfitting and enhance the generalizability of the models, 10-fold cross-validation was employed for model evaluation, with iterative refinements through repeated trials. To further assess and compare the predictive performance of these models, we constructed receiver operating characteristic (ROC) curves and determined the area under the ROC curve (AUC). An AUC value closer to 1 indicates better predictive performance. Additionally, we used calibration curves to evaluate the consistency between observed and predicted risks. The more closely the model's calibration curve aligns with the 45-degree line, and the closer the Brier score is to 0, the better the predicted probabilities match the observed event incidence.
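The evaluation loop described here — 10-fold cross-validated AUC for discrimination plus the Brier score for calibration — can be sketched in a few lines. This is a minimal illustration on synthetic data; the model list and features are placeholders, not the study's own variables.

```python
# Sketch of the model-comparison procedure: score candidate classifiers
# with 10-fold cross-validated AUC, then summarise calibration with the
# Brier score on a held-out set. Data are synthetic stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LRM": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    # discrimination: mean AUC over 10 cross-validation folds
    auc = cross_val_score(model, X_tr, y_tr, cv=10, scoring="roc_auc").mean()
    # calibration: Brier score of held-out predicted probabilities
    model.fit(X_tr, y_tr)
    brier = brier_score_loss(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: 10-fold AUC = {auc:.3f}, Brier = {brier:.3f}")
```

An AUC near 1 and a Brier score near 0 correspond to the criteria stated above.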
Furthermore, decision curve analysis (DCA) was used to evaluate the clinical utility of the models. By comparing the net benefit of the model with two default strategies (treating all or treating none), DCA provides insight into the clinical value of the models. To improve the interpretability of the machine learning models, which are often regarded as “black box” models because of their complex and opaque decision-making processes, we applied Shapley Additive Explanations (SHAP) analysis. SHAP is a cooperative game theory-based approach that quantifies each feature’s contribution by assessing its influence on model predictions. A SHAP value greater than 0 indicates a positive contribution of the feature to the prediction, while a value less than 0 indicates a negative contribution; the larger the absolute SHAP value, the greater the feature’s influence on the prediction. In our study, we visualized these contributions using importance ranking charts, which highlight the relative weight of each feature in influencing the outcome. Additionally, we employed partial dependence plots to show how each feature affects the predicted results, illustrating the relationship between individual features and the model’s output while accounting for the influence of other variables. 2.5 Statistical analysis Datasets that conformed to a normal distribution were described using the mean ± standard deviation, and comparisons between two groups were conducted using Student’s t-test. Non-normally distributed datasets were summarized as median and interquartile range and compared with the Mann-Whitney U test. Categorical variables were summarized as frequencies and percentages and analyzed using the chi-square test or Fisher’s exact test, as appropriate. Independent risk factors for INR were identified through univariate and multivariate logistic regression analysis.
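The study performed this testing workflow in R 4.2.1; an equivalent Python/scipy sketch on simulated group data (the group names and values are hypothetical) looks like the following: test normality first, choose the parametric or non-parametric comparison accordingly, and use a chi-square test for categorical variables.

```python
# Sketch of the two-group testing workflow on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
ir = rng.normal(500, 100, 200)    # e.g. CD4 counts in the IR group (simulated)
inr = rng.normal(280, 90, 150)    # lower counts in the INR group (simulated)

# Shapiro-Wilk normality check decides between t-test and Mann-Whitney U
normal = stats.shapiro(ir).pvalue > 0.05 and stats.shapiro(inr).pvalue > 0.05
if normal:
    p = stats.ttest_ind(ir, inr).pvalue        # parametric comparison
else:
    p = stats.mannwhitneyu(ir, inr).pvalue     # non-parametric fallback

# chi-square test for a 2x2 categorical table, e.g. sex by IR/INR status
table = np.array([[120, 80], [90, 110]])
_, p_cat, _, _ = stats.chi2_contingency(table)
print(f"group comparison p = {p:.3g}, chi-square p = {p_cat:.3g}")
```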
To evaluate the dose-response relationship between continuous variables and INR, we employed restricted cubic splines (RCS). This method enables the visualization and quantification of potential non-linear associations; by analyzing the shape of the dose-response curve, we can identify critical thresholds where the relationship between the predictor and the outcome changes. All data analysis and graphical representation were performed using R version 4.2.1. All tests were two-tailed, and a p-value <0.05 was considered statistically significant.
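The decision curve analysis used for model evaluation above can be sketched directly from its definition: the net benefit at threshold probability t is TP/N − FP/N · t/(1 − t), compared against "treat all" and "treat none" (net benefit 0). Below is a minimal illustration on simulated predictions; the prevalence and score distribution are assumptions, not the study's data.

```python
# Sketch of decision curve analysis: a useful model's net benefit should
# exceed both default strategies over clinically relevant thresholds.
import numpy as np

def net_benefit(y_true, y_prob, t):
    """Net benefit of treating everyone with predicted risk >= t."""
    treat = y_prob >= t
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    n = len(y_true)
    return tp / n - fp / n * t / (1 - t)

rng = np.random.default_rng(1)
y = rng.binomial(1, 0.4, size=1000)  # ~40% INR, roughly as in the cohorts
# informative but noisy risk scores (simulated)
prob = np.clip(0.4 * y + 0.3 + rng.normal(0, 0.15, 1000), 0.01, 0.99)

for t in (0.2, 0.4, 0.6):
    nb_model = net_benefit(y, prob, t)
    nb_all = net_benefit(y, np.ones_like(prob), t)  # "treat all" strategy
    print(f"t={t}: model {nb_model:.3f} vs treat-all {nb_all:.3f} vs treat-none 0")
```

Plotting net benefit against t for the model and the two default strategies reproduces the DCA curves referenced in the results.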
Results 3.1 Baseline characteristics and follow-up data changes in PLWH In this longitudinal cohort study of PLWH to predict the risk of INR during follow-up, we retrospectively included 903 PLWH from Nanfang Hospital and 674 PLWH from the Fifth Hospital of Zunyi University, all of whom had been under treatment for more than two years. These cohorts served as the internal and external datasets, respectively. As shown in , the Nanfang Hospital cohort comprised 903 participants, with 532 achieving immune response (IR) and 371 not achieving it, while the Fifth Hospital of Zunyi University cohort included 674 participants, with 408 in the IR group and 266 in the INR group. In both cohorts, the INR group was significantly older and had higher viral loads than the IR group, while CD4+ T cell counts were notably lower in the INR group. There were no significant differences between the two groups in gender, HAART regimens, or the prevalence of baseline HBsAg and anti-HCV. We visualized the clinical characteristics of PLWH at each follow-up point using line graphs ( ) and compared levels between the IR and INR groups. At each follow-up point, the IR group exhibited higher levels of CD4+ T cells, CD4/CD8 ratio, WBC counts, HGB, and PLT than the INR group. However, differences in CD8+ T cells, liver function markers such as ALT and AST, lipid levels including TG and CHOL, renal function as indicated by CR, and FPG were present only at certain follow-up points. A similar analysis was conducted in the external dataset ( ), and the results were consistent; the only exception was that CD8+ T cell levels were also higher in the IR group than in the INR group. 3.2 Independent risk factors associated with poor immune response in PLWH To investigate the factors influencing INR, we conducted a univariate logistic analysis, which identified 20 significant variables ( ).
Given the potential for multicollinearity among these variables, we performed a collinearity test on the variables with p < 0.05 in the univariate logistic analysis by calculating the variance inflation factor (VIF) ( ). Since all parameters had a VIF below 10, all were entered into the multivariate analysis, which identified Baseline-CD4 (OR = 0.995, P = 0.030), 6M-CD4 (OR = 0.992, P < 0.001), 12M-CD4 (OR = 0.993, P < 0.001), Baseline-HGB (OR = 1.023, P = 0.002), and 6M-HGB (OR = 0.968, P = 0.014) as independent factors for INR. To further analyze the relationship between baseline parameters and INR, we conducted the same analysis and found in the multivariate analysis ( ) that age (OR = 1.021, P = 0.010), HIV load (OR = 0.725, P = 0.009), baseline CD4 (OR = 0.983, P < 0.001), baseline WBC (OR = 0.842, P = 0.008), and baseline HGB (OR = 1.012, P = 0.014) were independently associated with INR. 3.3 Dose-response relationship between 6M-CD4, 12M-CD4, baseline-HGB, 6M-HGB and INR Through RCS analysis, we further investigated the relationship between the independent factors and INR incidence ( ). 6M-CD4 and 6M-HGB showed a linear relationship with INR (overall p<0.05, nonlinearity p>0.05), with threshold values of 273 cells/μL and 127.47 g/L, respectively. Conversely, a nonlinear relationship was evident between Baseline-CD4, 12M-CD4, Baseline-HGB, and INR (overall p<0.05, nonlinearity p<0.05). The risk of INR increased rapidly when Baseline-CD4 was below 165 cells/μL, 12M-CD4 was below 293 cells/μL, and Baseline-HGB was below 125.23 g/L. 3.4 Model construction and verification We divided the internal dataset into a training set for model construction and an internal validation set in a 7:3 split, while the external dataset served as the models’ external validation set. We compared baseline clinical characteristics across the three datasets ( ).
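The collinearity screen described above follows directly from the definition of the variance inflation factor: VIF_j = 1/(1 − R²_j), where R²_j comes from regressing feature j on the remaining features, with values below 10 taken as acceptable. A minimal sketch on simulated data:

```python
# Hand-rolled VIF computation: regress each column on the others via
# least squares and convert the resulting R-squared into a VIF.
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X."""
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        # intercept plus all other columns as regressors
        Z = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(3)
a = rng.normal(size=500)
b = rng.normal(size=500)
c = a + 0.1 * rng.normal(size=500)   # strongly collinear with a
X = np.column_stack([a, b, c])
print(np.round(vif(X), 1))           # columns a and c inflated, b near 1
```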
The median age of PLWH in all three datasets was 32 years, and the proportion of INR was similar across the datasets. Notably, the external validation set had a higher proportion of female PLWH and a lower proportion using INSTI-based regimens. Subsequently, we incorporated the variables that were significant in the univariate analysis ( ) into model construction, including baseline, 6-month, and 12-month CD4+ T cells, CD4/CD8 ratio, WBC, HGB, and PLT, among others. Using these variables, we developed eight predictive models with machine learning methods. We then validated the stability and generalizability of the eight models on the training, internal validation, and external validation sets. Ultimately, the RF model exhibited the best clinical predictive performance across all datasets, with AUROC values of 0.866, 0.943, and 0.897, respectively ( ). In terms of calibration, the RF model outperformed the other models in all three datasets, with Brier scores of 0.136, 0.102, and 0.126 ( ). In the clinical utility assessment, the DCA curves of the RF model were consistently above the “treat all” line and most other models’ curves across the majority of threshold probabilities, indicating significant clinical application value ( ). 3.5 Interpretability of the optimal model Given the RF model’s outstanding predictive capability on both the internal and external validation datasets, we designated it as the best-performing model. To clarify the clinical relevance of specific features, we quantified their importance using SHAP values. The variables were ranked by their impact on predicting INR risk ( ), identifying the top five predictors in PLWH after two years of HAART as 6-month CD4+ T cells, 12-month CD4+ T cells, baseline CD4+ T cells, 6-month CD4/CD8 ratio, and 12-month CD4/CD8 ratio. Consequently, CD4+ T cell counts measured between 6 and 12 months post-treatment are critical for assessing immune reconstitution.
Through the summary plot ( ), we detailed the positive and negative relationships between features and outcomes, finding that higher CD4+ T cell counts were associated with a lower probability of INR, and older age correlated with a higher probability of INR. We then illustrated the impact of the model variables on predictions for example cases of PLWH with IR and INR, respectively ( ). Finally, we generated partial dependence plots ( ). Specifically, the critical threshold for CD4+ T cell counts was around 350 cells/µL at 12 months, 250 cells/µL at 6 months, and 150 cells/µL at baseline. For the 6-month CD4/CD8 ratio, maintaining a value near 0.5 was associated with minimal INR risk. When parameter values fall below these critical thresholds, the risk of INR increases. Nevertheless, it is noteworthy that the partial dependence analysis did not reveal a clear association between age and the predicted INR risk.
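The partial dependence curves used above to read off the CD4 thresholds follow a simple recipe: fix one feature to each grid value for every subject and average the model's predicted risk. A hand-rolled sketch on a simulated model and dataset (not the study's own):

```python
# Manual partial dependence for feature 0 of a random forest classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           shuffle=False, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 20)
pdp = []
for v in grid:
    Xv = X.copy()
    Xv[:, 0] = v                                   # clamp feature 0 to v
    pdp.append(rf.predict_proba(Xv)[:, 1].mean())  # average predicted risk
pdp = np.array(pdp)
print(np.round(pdp, 2))
```

Plotting `pdp` against `grid` gives the partial dependence curve; in the study, a threshold is read off where the averaged risk rises sharply.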
Discussion In this study, we collected data from 1577 PLWH who had received at least two years of HAART at two centers. On one hand, we analyzed the changes in clinical parameters at different follow-up points and identified independent risk factors for INR using univariate and multivariate logistic regression. On the other hand, we systematically constructed machine learning predictive models using the dataset from Nanfang Hospital, which were further validated and assessed for sensitivity, specificity, and calibration on internal and external datasets. Our findings indicate that the RF model was the best predictor of INR. To our knowledge, this is the first machine learning predictive model specifically developed to predict the occurrence of INR among PLWH in South China. The model not only provides a valuable tool for clinical decision-making but also enhances our understanding of the dynamics and predictors of immune recovery in this population. Machine learning, with its capability to identify high-dimensional nonlinear relationships among clinical features, has been extensively applied to outcome prediction in HIV/AIDS research ( ; ; ; ). For example, researchers have utilized machine learning methods on electronic health record (EHR) data to precisely identify the burden of comorbidities in PLWH ( ). In recent years, traditional linear models have been used to predict INR ( ; ; ), and these models have provided auxiliary value in specific clinical practices. Unlike previous studies on INR prediction, this research included a comprehensive set of variables, such as liver and kidney function, lipid and glucose levels, and considered clinical indicators from multiple follow-up points. A machine learning model was constructed that accounts for these diverse clinical indicators and was subjected to rigorous internal and external validation.
This comprehensive approach enhances the predictive accuracy and reliability of the model, thereby contributing to clinical decision-making and the management of PLWH. In the line graphs, we observed that levels of WBC, HGB, and PLT were significantly higher in the IR group, and multivariate logistic regression indicated that baseline and 6-month HGB levels are independent risk factors for INR. Hematological alterations are prevalent complications in individuals with HIV/AIDS, linked to reduced quality of life and higher mortality rates ( ; ; ). Both direct and indirect effects of HIV infection on hematopoietic progenitor cells disturb bone marrow equilibrium and affect the proliferation and differentiation of hematopoietic cells, mainly leading to anemia and thrombocytopenia in peripheral blood ( ; ). Moreover, studies have shown that the improvement in CD4+ T cell counts following HAART leads to a decreased prevalence of cytopenias in PLWH, suggesting that HIV-related cytopenias are driven by HIV infection and immune suppression ( ; ). Therefore, this study not only reaffirms the connection of anemia and cytopenias with low CD4+ T cell counts but also highlights the predictive value of thrombocytopenia and anemia for INR in PLWH. Considering that anemia and thrombocytopenia are treatable conditions associated with higher mortality in PLWH, it is essential to monitor blood cell count changes throughout HIV infection. This monitoring helps identify the onset of these hematological disorders and enables vital clinical interventions to avert complications. To improve the interpretability of the model prediction process, we utilized SHAP values to quantify the impact of each variable on the model's predictions. The results indicated that CD4+ T cell counts at 6 and 12 months were crucial factors affecting the occurrence of INR among PLWH.
Previous research has frequently reported that baseline CD4+ T cell counts were an effective predictor for INR ( ; ), with studies suggesting that a baseline CD4+ T cell count ≥200 cells/mm³ was independently associated with inconsistent immune response development in multivariate analysis ( ). However, this study highlights that, compared to baseline CD4 levels, the CD4+ T cell counts at 6M and 12M require more attention. This shift in focus suggests a dynamic approach to monitoring immune recovery, emphasizing the importance of ongoing evaluation beyond initial treatment phases. It is noteworthy that after interpreting the RF model using SHAP, we found that CD4+ T cell levels and the CD4/CD8 ratio remained the most influential factors in the model. However, earlier research has shown that older age could contribute to insufficient CD4+ T cell recovery in PLWH, indicating that age can substantially affect the long-term restoration of CD4+ T cells ( ; ). Additionally, research has included the age at the initiation of HAART in the logistic prediction model for INR ( ). Although age is a recognized factor in predicting INR, the partial dependence plot from the partial correlation analysis did not show a clear distributional association between age and CD4+ T cell counts, which might suggest more complex underlying relationships that are influenced by other factors included in the model. Machine learning models, especially those like RF, can capture complex, nonlinear interactions that might not be evident or are assumed away in traditional linear models. The occurrence of INR is closely associated with cytokine dysregulation ( ). Chronic inflammation induced by HIV infection can lead to sustained elevations of IL-6 and TNF-α, which impair bone marrow function and suppress hematopoiesis, resulting in reduced T cell production ( ; ). This process may contribute to anemia and thrombocytopenia, further hindering immune recovery.
Additionally, individuals with INR exhibit elevated levels of immunosuppressive cytokines, such as IL-10 and TGF-β, which inhibit T cell proliferation ( ). Simultaneously, overexpression of PD-1 on CD4+ T cells promotes immune exhaustion, leading to limited proliferation and increased apoptosis ( ). In this study, CD4+ T cell counts were identified as significant predictors of INR, suggesting that chronic inflammation and T cell exhaustion may be potential mechanisms contributing to INR development. Our study has significant strengths. We constructed machine learning predictive models for early identification of INR in PLWH, integrating multiple clinical indicators from baseline, 6-month, and 12-month follow-up points. The internal and external validations of the model have demonstrated its stability. Furthermore, the parameters used in the model are commonly available in standard clinical settings, requiring no additional measurements. This will assist clinicians in predicting immune responses in a timely manner and implementing interventions. Despite these strengths, we acknowledge some limitations in our research. First, its retrospective nature carries inherent drawbacks related to the study design. Additionally, as the study population is exclusively from South China, the applicability and generalizability of our proposed predictive model to other populations or ethnic groups remain uncertain. Furthermore, due to limitations in time, resources, and study design, our research lacks mechanistic investigations such as cytokine analysis, which could have provided further insights into the immune responses differentiating responders from non-responders. These limitations highlight areas for future research to expand the model’s robustness and ensure its efficacy across diverse demographic settings.
Conclusion

This study demonstrates that the Random Forest model performs well in predicting the risk of INR among PLWH, facilitating early identification of and intervention for INR in clinical settings.
Physician factors associated with medical errors in Norwegian primary care emergency services | 7c700166-f151-4fc4-b5b3-eb359a3474dc | 8725954 | Family Medicine[mh] | Patient safety incidents (PSIs) have been defined as any unintended or unexpected incident(s) that could have, or were judged to have, led to patient harm . Medical errors are the predominant factor in these incidents. These errors may be defined as an act of omission or commission in planning or execution that contributes or could contribute to an unintended result . There is considerable research on medical errors and patient safety in hospital settings. In a meta-analysis, the impact of the different medical specialties could not be explored . On this background, we consider that more knowledge on patient safety in the primary care setting is needed. Our project is aimed at elucidating this through a study material based on patient complaints and a randomized control group of corresponding physicians from the same units and time period. The occurrence of medical errors in primary care is relatively common . These errors have been considered preventable in more than 90% of detected cases . Out-of-hours consultations are known to be a setting of high risk for patient safety incidents . In primary care, the physicians may face different and varied working conditions. This includes units with several co-workers and solo practices. Diagnostic errors are reported as most common in primary care solo practice due to workload and inability to easily cross-reference with colleagues . This work situation is the regularity in primary care emergency units (PCEUs) in Norway. Considering the potential of health deterioration following medical errors in an emergency situation, learning from these errors is crucial. User surveys, reporting systems for healthcare, and patient complaints have been utilized . Reviewing the medical record (clinical auditing) is mandatory in identifying poor clinical performance . 
In its essence, the medical record related to an emergency situation may be deficient in describing the complete course of events. Studying unintentional incidents is consequently demanding [ , , ]. In the PCEUs, quick decisions and immediate actions towards unknown patients without counselling are often required . This has induced the hypotheses that communication skills and experience are important factors in minimizing medical errors in these situations. It has been presumed that the perception of being understood may differ in relation to the physician’s gender, experience, and native language . In 2006 a Norwegian study of medicolegal assessments of complaints against general practitioners indicated an association between medical errors and male physicians and physicians with non-Norwegian citizenship . We have chosen to study physician factors that may lead to medical errors in PCEUs. In the first part of the study, we used a case-control design to focus on patient complaints, regardless of whether medical errors had occurred . We found that having a general practice, a general practitioner (GP) specialty, or a high workload at the PCEU was associated with a significantly reduced risk of evoking a complaint. Gender, seniority, and not having Norwegian citizenship at the time of authorization as a physician were not significantly associated with the risk of evoking a complaint. A complaint may be justified or not, and an error may be followed by a complaint or not. To uncover medical errors that may have led to patient harm, we have studied a group of physicians who had elicited a complaint while working in PCEUs, and a random sample of physicians from the same PCEUs in the same period. The aim of this part of the study was to examine the associations between characteristics of the physicians working in PCEUs, their workload, and the outcome of the assessment of the medical records in the complaint group and the random sample group.
Study setting

In 2015 Norway had nearly 5.2 million inhabitants. The population density is low. In 2018 the number of PCEUs was 177: 75 covering one municipality and 102 covering more than one. Structural and organizational arrangements are underlying factors in studying PSIs. For general practitioners in Norway, participation in out-of-hours service is an additional duty to their regular medical tasks . The qualification requirements for independent participation in this kind of duty consist of at least 30 months of clinical work after authorization and having had at least 40 work shifts at medical emergency services provided by PCEUs .

Participants and procedure

At the time of planning the study, the consultation rate at the PCEUs in Norway was 260 per 1000 inhabitants per year. A review of 11 studies with different definitions of incidents and data collection methods calculated a rate of 5 to 80 incidents per 100 000 consultations, in which patients were harmed or may have been harmed . Based on these results and the requirements to participate in our study, we decided to include PCEUs that in total covered one-third of the Norwegian inhabitants, living in urban or rural parts of the country. The chosen units covered ∼1.7 million people. To reach this target population, we invited ten PCEUs to participate in this project. Six of these units were serving major cities and four were serving mainly rural areas. We stipulated that from this selection a total of about 250 patient complaints could be received in one year. This corresponds with a retrospective Irish study from out-of-hours GP services . PCEUs in Norway use different electronic patient record systems without a common communicating platform. Because of this, a customized computerized data extraction programme for encrypted transmission of data from the medical records had to be developed. This customized computer programme randomly selected three control physicians (i.e.
random sample group) for each case physician (i.e. having evoked a complaint). In this way, records from four different physicians and four consultations were extracted for assessments. We mainly selected the largest PCEUs, with staffing that was expected to be able to handle the number of complaints together with operating the customized computerized data extraction programme. We assumed that requesting PCEUs at random for participation in the study could have elicited negative answers from a considerable number of PCEUs that would have had to refrain from participation because of a lack of personnel to meet the requirements for handling sensitive data. Additionally, difficulties in recruiting PCEUs to participate in registration studies are caused by understaffed administrations. The requirements to meet the ethical considerations on the use of sensitive personal data given by the ethics committee led us to conclude that only the larger cities would have the necessary full-time administrative positions. One of the city PCEUs had to decline to participate due to the installation of a new system for electronic record keeping. In choosing counties with rural inter-municipal PCEUs, comparability in terms of population structure and staffing was the governing consideration. These PCEUs were granted technical support, if necessary, in handling the extraction programme. To facilitate a unified approach to this project, each PCEU was visited twice and given oral and written guidelines for inclusion and exclusion of cases. They were also shown the use of the customized computerized data extraction programme. By this data extraction, we acquired the unique physician identification number (UPIN) and the parameters on workload during the fourteen days before the index consultation. This process was followed by assessments of the medical records to uncover any medical error.
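The recruitment figures stipulated above can be roughly sanity-checked with elementary arithmetic. Note that the ~250 expected complaints are not derivable from the incident rates, since complaints and incidents are distinct; this is purely an illustrative calculation using the stated rates:

```python
# Rough sanity check on the figures stated above (illustrative arithmetic only).
population_covered = 1_700_000            # people served by the selected PCEUs
consult_rate = 260 / 1000                 # consultations per inhabitant per year
incident_rate_low = 5 / 100_000           # incidents per consultation (review, low end)
incident_rate_high = 80 / 100_000         # incidents per consultation (review, high end)

consultations_per_year = population_covered * consult_rate
low = consultations_per_year * incident_rate_low
high = consultations_per_year * incident_rate_high
print(f"{consultations_per_year:,.0f} consultations/year")   # 442,000
print(f"expected incidents/year: {low:.0f} to {high:.0f}")   # 22 to 354
```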
A complaint was defined as any written utterance of discontent with the physician’s medical measures, sent directly to the PCEU or via external authorities. Excluded were complaints solely about rudeness, impoliteness, or poor communication, where no significant harm to the patient’s health could be presumed. The controls were three randomly selected physicians who had been on duty in the fourteen-day period prior to the case consultation. The computer programme selected these three control physicians from the same PCEU as the case physician. Consequently, a case physician could turn up as a control for another case and vice versa. The medical records were used for information about the physician characteristics and workload. For this, the UPIN was extracted together with the history of work shifts and numbers of patients during the fourteen-day period prior to the index consultation. The information extracted from the medical records was sent encrypted from the project employee to the proprietor of the LSR (Legestillingsregisteret – the Norwegian physician position register). From this register, information about the physicians was extracted. The LSR does not provide any information on citizenship change. Seniority was defined as the number of years after authorization. There were challenges in the data collection process, resulting in one-third of the anticipated number of 250 complaints . The data collection started September 1st, 2015, and was extended to March 1st, 2017. For all cases and randomized controls, the specified data were accessible in the medical records. The medical history was absent in one record, making the total number in the complaint group 77. The work shift roster at some of the minor units did not have three different physicians to choose from as controls for the fourteen-day period of inclusion, so the total number of controls was 231.
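The selection of up to three controls per case described above, where smaller rosters yield fewer than three, can be sketched in a few lines of standard-library code. The function and roster names here are hypothetical illustrations, not the actual extraction programme:

```python
import random

def pick_controls(on_duty_physicians, case_physician, k=3, seed=None):
    """Randomly pick up to k control physicians from those on duty in the
    14-day window, excluding the case physician (the four extracted records
    came from four different physicians). Hypothetical sketch only."""
    rng = random.Random(seed)
    pool = [p for p in on_duty_physicians if p != case_physician]
    # If the roster is too small, fewer than k controls are returned --
    # mirroring why the study ended with 231 rather than 3 x 78 controls.
    return rng.sample(pool, k=min(k, len(pool)))

roster = ["dr_a", "dr_b", "dr_c", "dr_case"]
print(pick_controls(roster, "dr_case", seed=42))  # three of dr_a/dr_b/dr_c
print(pick_controls(["dr_a", "dr_case"], "dr_case"))  # only one control available
```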
The extractions from the LSR reduced the complete data sets, mainly due to unidentifiable UPINs, leaving 217 medical records in the random sample group available for review. Missing data were only detected in this group (6.1%).

Assessments

The medical records in both groups were assessed by the first author (SZB), who has 40 years of experience in general practice, including 20 years of experience assessing medicolegal cases. A graded normative tool was used, consisting of 13 medicolegal cases ranging from those considered potentially harmful to patients to those considered not harmful. This tool is described in a joint report from the Norwegian Board of Health Supervision and the Norwegian Medical Association. Fact boxes are used in presenting the decisive medical factors . In our study, the assessments were based on the different elements of the medical records, including the measures implemented by the physicians. The assessments were divided according to this normative tool into three categories: (1) medical errors that may have led to harm or disadvantaged the health of the patient; (2) no detectable medical errors of clinical significance (no errors); and (3) inconclusive. The phrase “may have led to” reflects the fact that no objective post-encounter information was gathered, and is in agreement with the Norwegian legislation on reprimanding physicians in medicolegal cases . In this legislation, a medicolegal error is defined as applicable when the physician’s action may potentially significantly harm the patient. The category inconclusive consists of medical records with content that did not make it defensible to conclude whether a medical error had occurred or not. A medical audit was employed by using an experienced GP as a co-assessor (KS) to the first author (SZB). The two assessors discussed the inclusion of cases throughout the assessing process.
In this way, the potentially controversial cases were picked out for peer review by the assessor and the co-assessor, for example, penicillin or broad-spectrum antibiotics or none, indication for hospitalization, etc.

Variables

The following characteristics of the physicians were used: gender, seniority, citizenship at authorization as a physician (Norwegian or non-Norwegian), and specialty in GP. The physician identities and workload were extracted from the medical records, from the fourteen-day period prior to the consultation that elicited the complaint. By this, one consultation was extracted for each physician in both groups. The other characteristics were obtained from the LSR. The workload at the PCEU was defined as the extent of patient contacts and calculated as the number of patients divided by the number of work shifts, grouped into five categories. The first category consisted of those having no work shift during the fourteen-day period prior to the index consultation. The remaining four categories were divided into quartiles defining workload: Low (1 to <6.6 patients per work shift), Medium-low (6.6 to <8.7), Medium-high (8.7 to <12.0), and High (12.0 and higher).

Statistical analyses

The data used in this paper were based on material from a previous case-control study in which the case physicians had evoked a complaint . In the current study, the medical records were assessed to study physician factors associated with medical errors. For this, we utilized the case-control data by analyzing cases (i.e. having evoked a complaint) and controls (i.e. random sample) separately. Numbers, percentages, means, and standard deviations (SD) were provided to describe the data. Associations between assessment of errors and the characteristics of the physicians and workload were tested by Chi-square and t-tests. Due to low numbers in some categories of workload, Fisher’s exact test was applied in analyses of workload.
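The five-category workload measure defined in the Variables description above maps directly to a small lookup; the cut points are the quartiles reported in the text, and the function name is illustrative:

```python
def workload_category(n_patients, n_shifts):
    """Map a physician's prior-14-day record to the five workload categories
    defined above (cut points are the reported quartiles)."""
    if n_shifts == 0:
        return "No work shift"
    ratio = n_patients / n_shifts   # patients per work shift
    if ratio < 6.6:
        return "Low"
    if ratio < 8.7:
        return "Medium-low"
    if ratio < 12.0:
        return "Medium-high"
    return "High"

print(workload_category(0, 0))    # "No work shift"
print(workload_category(20, 4))   # 5.0 patients/shift -> "Low"
print(workload_category(26, 3))   # ~8.67 -> "Medium-low"
print(workload_category(36, 3))   # 12.0 -> "High" (boundary is inclusive)
```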
The tests were done separately for the group that evoked a complaint and for the random sample group. The data were analyzed using the Statistical Package for the Social Sciences (SPSS), version 25. The level of significance was set to α = 0.05.

Ethical considerations

The data collection was subjected to ethical considerations, and consent was obtained to retrieve sensitive personal information from the medical records (2013/99/REK vest – Regional Committee for Medical and Health Research Ethics West). This approval gave access to the medical records with the UPINs, and thereby the parameters on workload. Through this approval, the register data in the LSR were made available by the proprietor. All transmission of information was encrypted using Secure File Transfer Protocol. The premise for the collection of person-sensitive data was that the patients should be uniformly informed in writing about the project and that the identity of the patients and physicians would not be made known to the research group. The same procedure on information had to be applied to the potentially participating physicians. The societal benefit of the project was thereby considered to justify obtaining the described information. According to these preconditions, the data material was deidentified after retrieval of the necessary data.
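Fisher's exact test, applied above where workload categories had low counts, can be computed for a 2×2 table from the hypergeometric distribution. A minimal two-sided version in plain Python (a sketch, not the SPSS routine the study used) sums the probabilities of all tables with the same margins that are no more likely than the observed one:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]."""
    r1, r2 = a + b, c + d          # row totals
    c1 = a + c                     # first column total
    n = r1 + r2                    # grand total
    denom = comb(n, c1)
    p_obs = comb(r1, a) * comb(r2, c1 - a) / denom
    p = 0.0
    # Enumerate every table with the same margins; a's feasible range:
    for k in range(max(0, c1 - r2), min(r1, c1) + 1):
        p_k = comb(r1, k) * comb(r2, c1 - k) / denom
        if p_k <= p_obs * (1 + 1e-9):   # small tolerance for float ties
            p += p_k
    return min(p, 1.0)

print(round(fisher_exact_2x2(5, 0, 0, 5), 6))  # 0.007937 (= 2/252)
```

For larger tables (such as a 2×3 of gender by assessment category) the same idea applies but requires enumerating more tables; statistical packages handle that case directly.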
Results

The distribution of the three assessment categories is shown in the table. In the group of physicians who had evoked a complaint, 53.2% of the medical records were classified as disclosing medical errors that may have led to harm, or been disadvantageous, to the health of the patient. In the random sample group, this percentage was 3.2. The proportion of inconclusive assessments was similar for both groups (29.9 and 27.6%). No error was the conclusion for 16.9% in the complaint group; in the random sample group, this percentage was 69.1. The distributions of assessments of the information in the medical records by physician characteristics and workload are presented in separate tables for the complaint group and for the random sample group. In the complaint group, female physicians had a higher percentage of no errors (30.8%) and a lower percentage of medical records assessed as inconclusive (15.4%) compared to male physicians (p = 0.027). However, there were no gender differences regarding medical records assessed as medical errors. No significant differences were found for the other variables ( ). The percentages of medical errors in the random sample group were 4.7 for female physicians and 2.3 for their male colleagues. There were no significant differences in this random sample group ( ). There were 68 physicians who evoked complaints; seven physicians had two complaints and one physician had three complaints. Among the 68 physicians, 28 also contributed as control physicians. Further, 139 physicians were only in the random sample group.
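As a rough cross-check (not reported in the paper), multiplying the percentages above by the group sizes of 77 and 217 and rounding recovers integer counts that sum back to the group totals, which supports the internal consistency of the reported figures:

```python
def counts_from_percentages(percentages, n):
    """Approximate raw counts from reported percentages (illustrative check)."""
    return [round(p / 100 * n) for p in percentages]

# Order: errors, inconclusive, no error
complaint = counts_from_percentages([53.2, 29.9, 16.9], 77)
random_sample = counts_from_percentages([3.2, 27.6, 69.1], 217)
print(complaint, sum(complaint))          # [41, 23, 13] 77
print(random_sample, sum(random_sample))  # [7, 60, 150] 217
```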
Discussion

In this study on medical errors in Norwegian PCEUs, the essential finding was related to the gender of the physicians. Female physicians in the group who had evoked a complaint were assessed to have a higher proportion of no errors and a lower proportion of records that were inconclusive for assessment of management, compared to their male colleagues. Seniority, citizenship, GP specialty, and workload were not significantly associated with the outcome of the assessments of the medical records. In the random sample group, there were no significant differences related to the included variables. In a previous paper on complaints, based on the same material, we found that there was no gender difference associated with the risk of evoking a complaint . In that study, the medical records were not assessed regarding medical errors. Other studies have revealed a male predominance in making medical errors [ , , , ]. This has been correlated with nonprofessional issues, such as female physicians working fewer hours than their male colleagues and having different work and practice types . Nevertheless, the gender difference has been stated as fundamental in a systematic review and meta-analysis . The underlying reasons may be the perceived female characteristics of empathy, self-knowledge, and communication skills . In the current study, we found no gender differences regarding assessments of the medical records in the random sample group. However, in the group of physicians who had evoked a complaint, the gender differences were related to no-error and inconclusive assessments. This may indicate differences in journaling between female and male physicians. Adequate journaling is mandatory to assess the quality of medical interventions and patient care. The fact that our study revealed gender differences related to no-error and inconclusive assessments may indicate that female physicians are more thorough in journaling than their male colleagues.
These findings may also coincide with a presumed group of male physicians with generally poor clinical performance, who often elicit complaints that reveal poor journaling, making the proper clinical assessment of their performance difficult. The medical record should contain the necessary information that is relevant to the patient's reason for the encounter. With this, the medical record stands out as the crucial tool for the physician to make the right decisions for the patient . Physician training, expressed by seniority, workload or GP specialty, did not seem to have significant implications. In a previous paper, we discussed the finding related to the absence of importance of experience expressed by seniority as a doctor . The studies revealing higher rates of medical errors with increasing seniority were not confirmed by our study . It could be anticipated that increasing experience would simplify and improve the professional decision-making process. In this context, it would be expected that having more than one work shift during a fourteen-day period would achieve the effect of training . However, the fourteen-day period may have been too short to reveal such an effect. On the other hand, a heavy workload did not contribute to medical errors. This may have been facilitated by better knowledge of the routines and the cooperative relations. A conceivable reason for not confirming the advantageous effect of the GP specialty may be the overall effect of the Norwegian qualification requirements for unrestricted work in a PCEU . Since 2012, a course on emergency medicine has been required to qualify as a GP specialist . This may be of decisive importance: 57.6% of the physicians in the random sample group held this specialty, and the number of additional physicians in training to qualify is unknown. Against this background, the persistence of the gender difference is remarkable.
Language skills and cultural competence have been shown to be prerequisites for satisfactory communication and for avoiding unfortunate events . Physicians who do not have Norwegian citizenship may have their communication skills influenced by their native language and a divergent cultural approach in communicating with patients. Nevertheless, in studying patient complaints, citizenship did not seem to be an explanatory factor or significantly associated with the risk of evoking a complaint . Physicians with non-Norwegian citizenship probably enhance their Norwegian communication skills during the years they work in Norway. However, our material did not allow any conclusions on associations between citizenship and increasing seniority. Comparable results for physicians with or without Norwegian citizenship may be promoted by the Norwegian prerequisites for working in a PCEU and the required course in emergency medicine for qualifying as a GP specialist . The consequence of these regulations is in line with the results of a study including graduates from foreign versus US medical schools, showing better patient outcomes with graduates from foreign schools . This is explained by a rigorous approach to incorporating international medical graduates. The physician's attitude may induce a complaint. In this study, complaints about the behavior of the physician were not included. We acknowledge the fact that rudeness may be experienced as harmful. This is an important issue in ordinary general practice, where building trust and confidence are crucial parameters for following up. However, we doubt that this kind of behavior is significantly medically harmful to the patient. We recognize that poor communication can cause the patient to omit symptoms or the doctor to omit follow-up questions. To determine whether these unfortunate conditions were present, medical records from follow-up consultations must be available.
This was, however, not within the scope of the given ethical considerations. It is intriguing that a medical error was uncovered for only 53.2% of the patient complaints. This is consistent with a Norwegian study on medicolegal assessments . However, this does not support the assumption that nearly half of the complaints were unfounded. In the same way, disclosing sparse recording does not necessarily lead to the conclusion that the medical measures were erroneous. As medical records in PCEUs often do not document the complete course of events, this inconclusiveness may hide deficiencies in managing the patients. These deficiencies may be assumed to be the main reason for the proportion of medical records assessed as inconclusive in this study, i.e. making it impossible to decide whether or not a medical error had, or could have, led to patient harm. Furthermore, the quality of medical records is measured by their coverage of relevant and necessary information . The finding that only 3% of the medical records in the random sample group revealed medical errors is consistent with larger studies from primary health care . As complaints and errors should be seen in relation to each other, the lack of concurrence in the results of our studies may be surprising . However, as a complaint is written in retrospect, the medical record written in connection with the consultation should be the basis for the assessments. Recently, a Norwegian study of the frequency and distribution of disciplinary actions for medical doctors found higher rates for physicians who work in small clinics or alone (GPs and private specialists) than for those working in large organizations (hospital doctors) . This emphasizes the impact of systemic and structural factors. However, the study design does not allow any conclusion about how organizational or systemic factors affected the decisions on disciplinary actions.
The reported differences may thus partly be attributed to how these factors affect the assessment of the cases for GPs and hospital doctors. In the current study, assessing journaling has emerged as a cornerstone in learning from medical errors. Through further studies, the elements of the medical record should be analyzed to uncover the critical elements in obtaining information about the patient's reason for encounter and implemented measures. This means studying the underlying elements in the physician's considerations and decisions documented in the medical record, including any additional notes revealing information that may have been available to the physician. This should include testing medical history-taking devices with the potential to increase the quality of anamnesis and differential diagnosis . To learn, we must apply current knowledge and conduct further research to identify why errors occur. This must include the impact of factors like communication skills, behavior, and decision-making ability. Out-of-hours services need to focus on a culture of learning, acknowledging the need for basic routines and good leadership, guided by qualitative studies. The design and results of this study on medical errors may have the potential to guide further research and facilitate reflection on drivers for improvement. Strengths and limitations The strength of this study is that it includes a group of physicians evoking a complaint and a random sample group, both with valid and nearly complete data sets. The proportion of inconclusive medical records was similar in the complaint group and the random sample group (29.9 and 27.6%). This substantiates the assumption of consistency in the assessments of the medical records in the two groups. Knowing about the complaints does not seem to have influenced the judgements.
The use of a normative tool facilitated consistency in reviewing the medical records . The main weakness of the study is the unexpectedly low number of medical records included. There were several reasons for this: compatibility problems between the customized data extraction programme and the different electronic medical record systems, changes of leadership during the study period at some PCEUs, and heavy workload. The lack of electronic compatibility was the essential reason for one of the larger units. Broad-scale extraction of textual material from different electronic medical record systems is at present still not possible. The low number of medical records limits the applicability of the results of this study. It is a weakness of the study that we were not able to study communication problems and cultural competence among the participants. As smaller PCEUs with rather few participating physicians were included, the frequency of work shifts increased the probability of being picked as a control more than once. This may be a bias in this study, reflected by the lower number of individual physicians than would be expected from the number of cases. This does, however, not seem to have influenced the results. We also acknowledge the limitation that cases and controls were analyzed separately.
In studying physician factors that may induce medical errors in PCEUs in Norway, medical records written by two groups of physicians were reviewed: a group of physicians who had evoked a complaint and a random sample of physicians. The only significant results were found in the complaint group. In this group, we found a higher percentage of records with no assessed medical error and a lower percentage of inconclusive records among female physicians compared to their male colleagues. Physician gender, seniority, citizenship, GP specialty and workload were not significantly associated with assessed medical errors in the random sample of physicians. The Norwegian regulations on working in a PCEU may have modulated the results. Future research should focus on the underlying elements of these findings, including journaling and organizational and structural factors.
Cardiac effects of two hallucinogenic natural products, | 4c6ac1b3-142b-485f-b123-c9312aa9d266 | 11862204 | Cardiovascular System[mh] | The naturally occurring molecules 5-methoxy- N , N -dimethyl-tryptamine (5-MeO-DMT) and N , N -dimethyl-tryptamine (DMT) are hallucinogenic drugs. They occur in South America, mainly in plants of the Amazon region , . Given perorally alone, 5-MeO-DMT is rapidly inactivated by a first-pass effect . Hence, 5-MeO-DMT has to be injected or has to be supplemented with inhibitors of its enzymatic degradation . These enzyme inhibitors could be antidepressant drugs like tranylcypromine. There are also reports in the literature that pure 5-MeO-DMT was mixed with plant extracts containing harmine or harmaline to inhibit the first-pass effect . DMT and 5-MeO-DMT are chemically similar to serotonin (5-HT; 5-hydroxy-tryptamine) and can therefore bind to several of the seven known classes of serotonin receptors. Stimulation of brain 5-HT 2A -receptors explains the hallucinogenic effects of DMT and 5-MeO-DMT . DMT, but not 5-MeO-DMT, increased the beating rate in isolated rabbit hearts . However, serotonin acts in rabbit hearts by release of noradrenaline and not via serotonin receptors, and hence rabbit hearts are not a good model for the human heart . In contrast to rabbits, in rats 5-MeO-DMT decreased the heart rate . These effects were suggested to be due to stimulation of 5-HT 1 serotonin receptors . We had reported, in contrast, that 5-HT increased force of contraction in isolated atrial preparations from rats via 5-HT 2A serotonin receptors . However, as far as we know, inotropic effects of DMT and 5-MeO-DMT via 5-HT 4 receptors have not yet been reported in any species. We chose to study these tryptamine derivatives because they are naturally occurring hallucinogenic drugs, prohibited in many countries, yet popular for "recreational" purposes and sometimes leading to fatal intoxications.
Thus, from a clinical perspective it would be helpful to know whether or not these drugs act as agonists at cardiac 5-HT 4 receptors, because then 5-HT 4 receptor antagonists could be used to treat the cardiac side effects of these intoxications. Hence, we tested the hypothesis that DMT and 5-MeO-DMT act as agonists at human cardiac 5-HT 4 serotonin receptors (Fig. ). Parts of the data have been reported in abstract form , .
Transgenic mice A mouse line with cardiomyocyte-specific expression of the human 5-HT 4a receptor has been generated in our laboratories . The cardiac myocyte-specific expression was achieved by the use of the α-myosin heavy chain promoter. The age of the animals studied in the atrial contraction experiments was around 154 days. All mice were housed under conditions of optimum light, temperature and humidity with food and water provided ad libitum. The investigation conformed to the Guide for the Care and Use of Laboratory Animals as published by the National Research Council (2011). The animals were handled and maintained according to the approved protocols of the Animal Welfare Committee of the University of Halle-Wittenberg, Halle, Germany. The study was conducted in accordance with ARRIVE guidelines . Contractile studies on mouse atrial preparations In brief, mice were euthanized by intraperitoneal injection of sodium pentobarbital (250 mg/kg body weight) . Then, the right and left atrial preparations were isolated and mounted in organ baths as previously described , . The bathing solution of the organ baths contained 119.8 mM NaCl, 5.4 mM KCl, 1.8 mM CaCl 2 , 1.05 mM MgCl 2 , 0.42 mM NaH 2 PO 4 , 22.6 mM NaHCO 3 , 0.05 mM Na 2 EDTA, 0.28 mM ascorbic acid and 5.05 mM glucose. The solution was continuously gassed with 95% O 2 and 5% CO 2 and maintained at 37 °C and pH 7.4 – . Spontaneously beating right atrial preparations from mice were used to study chronotropic effects, and the left atrial preparations were field-stimulated at a frequency of 1 Hz to study force of contraction. The drug application was as follows. After equilibration was reached, 1 nM to 10 µM DMT or 5-MeO-DMT was added to the atrial preparations to establish concentration-response curves, followed directly by a concentration-response curve of 5-HT (1 nM to 1 µM). This was to test whether DMT or 5-MeO-DMT behave as full or partial agonists.
After washout, a concentration-response curve for DMT or 5-MeO-DMT (1 nM to 10 µM) was performed again to test whether there were any desensitization effects that could compromise the results. Contractile studies on human atrial preparations The contractile studies on human preparations were done using the same setup and buffer as used in the mouse studies (see section above). The right atrial preparations were obtained from 14 male and two female patients aged 59–78 years (mean ± SD: 68.9 ± 6.4 years) undergoing bypass surgery. Further details on patient characteristics are summarized in Table . Our methods used for atrial contraction studies in human samples have been previously published and were not altered in this study – . This study has been performed in accordance with the Declaration of Helsinki. The study protocol was approved by the local ethics committee of the Medical Faculty of the Martin Luther University Halle-Wittenberg (Ethics approval number: hm-bü 04.08.2005) and all research was performed in accordance with relevant guidelines/regulations. Informed consent was obtained from all patients included in the study. Western blotting The homogenization of the samples, protein measurements, electrophoresis, primary and secondary antibody incubation and quantification were performed following our previously established protocols , – . Briefly, samples were homogenized in a buffer containing 10 mM NaHCO 3 and 5% SDS. Electrophoresis was performed in Novex™ 4–20% "Tris–Glycine Plus Midi Protein Gels" (Invitrogen, Thermo Fisher Scientific, Waltham, Massachusetts, USA). Subsequently, the proteins were transferred to a nitrocellulose membrane (Amersham Protran 0.45 µm, Cytiva, Germany) by wet transfer in a phosphate buffer (42 mM Na 2 HPO 4 , 8 mM NaH 2 PO 4 ) for four ampere-hours at 4 °C.
The following primary antibodies were used: anti-serine-16-phosphorylated phospholamban (PS16-PLB; 1:5000; #A010-12AP; Badrilla, Leeds, UK) and anti-calsequestrin as a cardiac myocyte-specific loading control (CSQ; 1:20,000; #ab3516; abcam, Cambridge, UK). The signals were visualized by using chemiluminescence (Immobilon™ Western, Millipore, Merck; Darmstadt, Germany) and a digital imaging system (Amersham ImageQuant 800; Cytiva Europe GmbH, Freiburg im Breisgau, Germany). Data analysis Data shown are means ± standard deviation. Recordings and primary analyses of contraction data were done with LabChart 8 (ADInstruments, Spechbach, Germany) and primary analyses of Western blots were performed with ImageQuant 10 (Cytiva, Freiburg, Germany). Statistical analyses and preparation of graphics were done with Prism 9.0 (GraphPad Software, San Diego, California, USA) using analysis of variance followed by Bonferroni's posttest. A p value < 0.05 was considered significant. Drugs and materials Serotonin (5-HT) hydrochloride was purchased from Sigma-Aldrich (Germany). Dimethyl-tryptamine (DMT) was purchased as a solution (1 mg/ml in methanol) from Sigma-Aldrich (Germany) or as a solid compound from LGC GmbH (Luckenwalde, Germany). 5-methoxy-dimethyl-tryptamine (5-MeO-DMT) was purchased as a solid compound from Sigma-Aldrich (Germany) as well as from LGC GmbH (Luckenwalde, Germany). Both drugs were diluted/dissolved in a 50% DMSO and 50% water mixture and stored at − 20 °C. GR125487 was purchased from TOCRIS (Bio-Techne, Wiesbaden, Germany). All other chemicals were of the highest purity grade commercially available. Deionized water was used throughout the experiments. Stock solutions were prepared fresh daily.
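As a rough illustration of the statistical procedure described above (analysis of variance followed by Bonferroni's posttest), the sketch below uses SciPy on invented example values; the actual analyses in this study were performed with GraphPad Prism 9.0, and the group data here are placeholders.

```python
# Sketch of the described statistics: one-way ANOVA followed by Bonferroni-
# corrected pairwise t-tests. The force values below are invented examples,
# not data from the study (which was analyzed in GraphPad Prism).
from itertools import combinations
from scipy import stats

groups = {
    "control":   [1.0, 1.1, 0.9, 1.05, 0.95],
    "DMT":       [1.4, 1.5, 1.3, 1.45, 1.35],
    "5-MeO-DMT": [1.9, 2.0, 1.8, 1.95, 1.85],
}

# Global test across all groups
f_stat, p_global = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_global:.4g}")

# Bonferroni post-test: multiply each pairwise p-value by the number of comparisons
pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    p_corr = min(p * len(pairs), 1.0)
    print(f"{a} vs {b}: corrected p = {p_corr:.4g}")
```

A corrected p below 0.05 for a pair would correspond to a significant difference in the sense used throughout the Results section.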
Studies in the isolated left atria from mice We have previously shown that 5-HT increases the force of contraction in left atria from 5-HT 4 -TG, but not in left atria from WT . Here, as a next step, we wanted to compare those data with those of DMT and 5-MeO-DMT and to determine whether they also exert positive inotropic effects in 5-HT 4 -TG. Like serotonin, its derivative DMT (original recording: Fig. A) also raised force in a concentration- and time-dependent manner in left atrial preparations from 5-HT 4 -TG. The data on force of contraction are summarized in Fig. B and the time parameters of the contraction are summarized in Fig. D. Corresponding to the increase in force at 10 µM DMT, the time of relaxation was shortened, indicative of a cAMP-dependent mechanism. Thereafter, we additionally applied, in a cumulative way, increasing concentrations of 5-HT (Fig. A). DMT was less potent and less effective than 5-HT (pEC 50 = 8.3) at raising force of contraction (Fig. A,B). Previously, we had noted that 5-HT rapidly and effectively desensitized the 5-HT 4 receptor under our experimental conditions . Hence, the question arose whether DMT would also lead to functional desensitization. Therefore, we washed out the effects of DMT and 5-HT (Fig. A, washout), and subsequently, DMT was reapplied cumulatively. Once more, DMT elicited a positive inotropic effect (Fig. A). These data are summarized in Fig. B (second CRC). In parallel, using the spontaneously beating right atria, we estimated whether under these conditions DMT affected the beating rate. As depicted in Fig. C, DMT exerted a very small, negligible positive chronotropic effect. As in Fig. A, we then added 5-HT cumulatively. DMT was less potent and less effective than 5-HT at raising the beating rate (Fig. C). DMT, like 5-HT , failed to affect the force of contraction or the beating rate in WT (data not shown).
Next, we tested 5-MeO-DMT and found that 5-MeO-DMT raised force in a concentration- and time-dependent manner in left atrial preparations from 5-HT 4 -TG (original recording: Fig. A). The data on force of contraction are summarized in Fig. B and the time parameters of the contraction are summarized in Fig. D. Compared to DMT, 5-MeO-DMT was more potent and more effective at raising force of contraction and, accordingly, the shortening of the time parameters was more pronounced (Fig. D). Thereafter, as done for DMT, we additionally applied, in a cumulative way, increasing concentrations of 5-HT (Fig. A). 5-MeO-DMT (pEC 50 ~ 5.8) seemed to be less potent but as effective as 5-HT at raising force of contraction (Fig. A). In other words, additionally applied 5-HT could not raise force of contraction further (Fig. A). However, given the experimental setup, it was not clearly possible to determine whether 5-MeO-DMT is a partial agonist in 5-HT 4 -TG, as the preparations had reached their maximal ability concerning contraction and beating rate. Therefore, the serotonin component was omitted from the graph of Fig. B. Finally, after washout, 5-MeO-DMT was applied again and once more induced a positive inotropic effect (Fig. A). These data are summarized in Fig. B (second CRC). Here, the right atria were also used to study whether under these conditions 5-MeO-DMT affected the beating rate. As depicted in Fig. C, 5-MeO-DMT exerted a positive chronotropic effect. As in the left atria, we then added 5-HT cumulatively. 5-MeO-DMT seemed to be less potent but as effective as 5-HT at raising the beating rate, but for the reasons described above, the serotonin component was omitted from the graph of Fig. C. 5-MeO-DMT, like 5-HT , failed to affect the force of contraction or the beating rate in WT (data not shown).
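The contraction kinetics reported for these experiments are expressed as t max50, the time needed to reach 50% of the maximal increase in force. As a minimal sketch of how such a value can be read off a sampled force trace, the code below uses a synthetic mono-exponential rise with invented parameters, not a recording from this study.

```python
# Illustrative computation of t_max50 (time to reach 50% of the maximal
# increase in force) from a sampled force trace. The trace here is a
# synthetic mono-exponential rise with made-up parameters.
import numpy as np

dt = 1.0                                # sampling interval in seconds
t = np.arange(0, 600, dt)               # 10 min of "recording"
baseline, delta, tau = 1.0, 1.0, 80.0   # mN, mN, s (invented values)
force = baseline + delta * (1 - np.exp(-t / tau))

# Level halfway between baseline and the maximum reached in the trace
half_level = baseline + 0.5 * (force.max() - baseline)
# First sample at or above the half-maximal level
t_max50 = t[np.argmax(force >= half_level)]
print(f"t_max50 = {t_max50:.0f} s")
```

On real recordings the baseline would be taken from the pre-drug steady state rather than from the fitted curve, but the half-level read-out is the same.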
Even though we could not calculate the EC 50 values for DMT and 5-MeO-DMT because the concentration-response curves did not reach a plateau in the concentration range achievable in this study, it is obvious that 5-HT is the most potent compound, followed by 5-MeO-DMT, while DMT has the lowest potency. This order is reflected in the kinetic parameters of the substances, as shown in Fig. . The time to reach the maximum force of contraction was shortest for 5-HT and longest for DMT (Fig. ). In detail, 5-HT (t max50 = 56 s) increased the force of contraction approximately twice as fast as 5-MeO-DMT (t max50 = 130 s), and 5-MeO-DMT increased the force of contraction approximately twice as fast as DMT (t max50 = 246 s) (Fig. ). As depicted in the scheme in Fig. , we hypothesized that DMT and 5-MeO-DMT would increase the phosphorylation state of phospholamban at serine-16 (PS16-PLB). Hence, a separate set of contraction experiments was performed. We added 10 µM DMT or 5-MeO-DMT to left atrial preparations of 5-HT 4 -TG and WT until the maximum effect was reached (10 min) and then froze the atria. From these frozen atria, we performed and quantified Western blots. We noted that 5-MeO-DMT, but not DMT, increased the phosphorylation state of phospholamban in left atrial preparations from 5-HT 4 -TG but not WT (Fig. ). This is depicted in original Western blots (Fig. A) and summarized in bar diagrams (Fig. B,C). Stimulation of β-adrenoceptors by isoprenaline was used as a positive control. Moreover, boiling of the sample shifted the phospholamban band from the pentameric form to the monomeric form; this effect was used to clearly identify the phospholamban band in the Western blot (Fig. A). Studies in the isolated atria from humans Now, the question arose whether these functional effects are confined to transgenic mice or also have clinical relevance in humans. Hence, we studied human atrial preparations to measure force under electrically stimulated isometric conditions.
In general, the contraction data from human preparations showed a larger scatter compared to mouse preparations, which is due to the heterogeneity regarding, e.g., age, genetic background, health status, disease and medication of the patients included in the study (Table ), which probably also applies to the serotonin receptor density. DMT alone did not increase force of contraction in isolated electrically paced right atrial muscle strips from patients ( n = 5), but in the presence of the phosphodiesterase III inhibitor cilostamide, a positive inotropic effect of 10 µM DMT was seen (Fig. A). This positive inotropic effect of DMT was antagonized by tropisetron (Fig. A) and by the 5-HT 4 receptor antagonist GR125487. The data are summarized in Fig. A. Tropisetron itself did not affect the force of contraction, as demonstrated by the control experiments shown in Fig. E. The decline of force of contraction after 10 µM tropisetron was not different from the time-dependent decline of force of contraction (Fig. E). These control experiments were repeated three times, giving the same results. The effects of 5-MeO-DMT were different from those of DMT. In the atrial preparations from some patients ( N = 3) with an apparently high responsiveness to positive inotropic substances for unknown reasons, we noted that 5-MeO-DMT alone increased the force of contraction (Fig. B), and this increase was accompanied by an increased phospholamban phosphorylation (Fig. C). In the atrial preparations from other patients ( N = 9) with an apparently lower responsiveness to positive inotropic substances, 5-MeO-DMT increased the force of contraction, as observed for DMT, only in the presence of cilostamide (Fig. D). This positive inotropic effect of 10 µM 5-MeO-DMT was antagonized by tropisetron (Fig. D) as well as by the 5-HT 4 receptor antagonist GR125487 (Fig. F). The effect of 5-MeO-DMT was concentration-dependent and is summarized in Fig. B.
In a further series of experiments, in the presence of cilostamide, concentration-response curves for DMT and 5-MeO-DMT were performed from 0.1 to 100 µM (Fig. C). Here, the potency and efficacy of DMT and 5-MeO-DMT appeared to be the same. Unfortunately, the pEC 50 values could not be calculated accurately because the plateau of the concentration-response curves was not reached even at 100 µM, a concentration usually not reached in humans. An approximate estimate gave a pEC 50 ≤ 4.5 for both DMT and 5-MeO-DMT in the human atrium. The evaluation of the time parameters revealed that the time to peak tension and the time of relaxation were shortened in a concentration-dependent manner by DMT and 5-MeO-DMT (Fig. A,C). The contraction kinetics of DMT and 5-MeO-DMT in the human atria (Fig. B,D) were similar to the kinetics found in the 5-HT 4 -TG atria (Fig. ). That is, 5-MeO-DMT reaches the maximum inotropic effect for a given concentration almost twice as fast as DMT.
We have previously shown that 5-HT increases the force of contraction in left atria from 5-HT4-TG, but not in left atria from WT. Here, as a next step, we wanted to compare those data with those of DMT and 5-MeO-DMT, and to determine whether the latter also exert positive inotropic effects in 5-HT4-TG. Like serotonin, its derivative DMT (original recording: Fig. A) raised force in a concentration- and time-dependent manner in left atrial preparations from 5-HT4-TG. The data on force of contraction are summarized in Fig. B and the time parameters of the contraction in Fig. D. Corresponding to the increase in force at 10 µM DMT, the time of relaxation was shortened, indicative of a cAMP-dependent mechanism. Thereafter, we additionally applied increasing concentrations of 5-HT in a cumulative manner (Fig. A). DMT was less potent and less effective than 5-HT (pEC50 = 8.3) at raising force of contraction (Fig. A,B). Previously, we had noted that 5-HT rapidly and effectively desensitized the 5-HT4 receptor under our experimental conditions. Hence, the question arose whether DMT would also lead to functional desensitization. Therefore, we washed out the effects of DMT and 5-HT (Fig. A, washout) and subsequently reapplied DMT cumulatively. Once more, DMT elicited a positive inotropic effect (Fig. A). These data are summarized in Fig. B (second CRC). In parallel, using the spontaneously beating right atria, we assessed whether DMT affected the beating rate under these conditions. As depicted in Fig. C, DMT exerted a very small, negligible positive chronotropic effect. As in Fig. A, we then added 5-HT cumulatively. DMT was less potent and less effective than 5-HT at raising the beating rate (Fig. C). DMT, like 5-HT, failed to affect the force of contraction or the beating rate in WT (data not shown).
Next, we tested 5-MeO-DMT and found that it raised force in a concentration- and time-dependent manner in left atrial preparations from 5-HT4-TG (original recording: Fig. A). The data on force of contraction are summarized in Fig. B and the time parameters of the contraction in Fig. D. Compared to DMT, 5-MeO-DMT was more potent and more effective at raising force of contraction and, accordingly, the shortening of the time parameters was more pronounced (Fig. D). Thereafter, as done for DMT, we additionally applied increasing concentrations of 5-HT in a cumulative manner (Fig. A). 5-MeO-DMT (pEC50 ~ 5.8) seemed to be less potent than, but as effective as, 5-HT at raising force of contraction (Fig. A). In other words, additionally applied 5-HT could not raise force of contraction further (Fig. A). However, given the experimental setup, it was not possible to determine clearly whether 5-MeO-DMT is a partial agonist in 5-HT4-TG, as the preparations had already reached their maximal contractile and chronotropic capacity. Therefore, the serotonin component was omitted from the graph in Fig. B. Finally, after washout, 5-MeO-DMT was applied again and once more induced a positive inotropic effect (Fig. A). These data are summarized in Fig. B (second CRC). Here, the right atria were also used to study whether 5-MeO-DMT affected the beating rate under these conditions. As depicted in Fig. C, 5-MeO-DMT exerted a positive chronotropic effect. As in the left atria, we then added 5-HT cumulatively. 5-MeO-DMT seemed to be less potent than, but as effective as, 5-HT at raising the beating rate; for the reasons described above, the serotonin component was omitted from the graph in Fig. C. 5-MeO-DMT, like 5-HT, failed to affect the force of contraction or the beating rate in WT (data not shown).
Even though we could not calculate the EC50 values for DMT and 5-MeO-DMT because the concentration-response curves did not reach a plateau in the concentration range achievable in this study, it is obvious that 5-HT is the most potent compound, followed by 5-MeO-DMT, while DMT has the lowest potency. This order is reflected in the kinetic parameters of the substances, as shown in Fig. . The time to reach the maximum force of contraction was shortest for 5-HT and longest for DMT (Fig. ). In detail, 5-HT (tmax50 = 56 s) increased the force of contraction approximately twice as fast as 5-MeO-DMT (tmax50 = 130 s), and 5-MeO-DMT in turn approximately twice as fast as DMT (tmax50 = 246 s) (Fig. ). As depicted in the scheme in Fig. , we hypothesized that DMT and 5-MeO-DMT would increase the phosphorylation state of phospholamban at serine-16 (PS16-PLB). Hence, a separate set of contraction experiments was performed. We added 10 µM DMT or 5-MeO-DMT to left atrial preparations of 5-HT4-TG and WT until the maximum effect was reached (10 min) and then froze the atria. From these frozen atria, Western blots were performed and quantified. We noted that 5-MeO-DMT, but not DMT, increased the phosphorylation state of phospholamban in left atrial preparations from 5-HT4-TG but not WT (Fig. ). This is depicted in original Western blots (Fig. A) and summarized in bar diagrams (Fig. B,C). Stimulation of β-adrenoceptors by isoprenaline was used as a positive control. Moreover, boiling the sample shifted the phospholamban band from the pentameric to the monomeric form; this effect was used to unambiguously identify the phospholamban band in the Western blot (Fig. A).
Studies in the isolated atria from humans

Now, the question arose whether these functional effects are confined to transgenic mice or also have clinical relevance in humans. Hence, we studied human atrial preparations to measure force under electrically stimulated isometric conditions. In general, the contraction data from human preparations showed a larger scatter than those from mouse preparations, which is due to the heterogeneity in, e.g., age, genetic background, health status, disease and medication of the patients included in the study (Table ), and which probably also applies to the serotonin receptor density. DMT alone did not increase the force of contraction in isolated electrically paced right atrial muscle strips from patients (n = 5), but in the presence of the phosphodiesterase III inhibitor cilostamide, a positive inotropic effect of 10 µM DMT was seen (Fig. A). This positive inotropic effect of DMT was antagonized by tropisetron (Fig. A) and by the 5-HT4 receptor antagonist GR125487. The data are summarized in Fig. A. Tropisetron itself did not affect the force of contraction, as demonstrated by the control experiments shown in Fig. E. The decline of force of contraction after 10 µM tropisetron was not different from the time-dependent decline of force of contraction (Fig. E). These control experiments were repeated three times with the same results. The effects of 5-MeO-DMT differed from those of DMT. In atrial preparations from some patients (N = 3) with an apparently high responsiveness to positive inotropic substances, for unknown reasons, 5-MeO-DMT alone increased the force of contraction (Fig. B), and this increase was accompanied by increased phospholamban phosphorylation (Fig. C). In atrial preparations from other patients (N = 9) with an apparently lower responsiveness to positive inotropic substances, 5-MeO-DMT increased the force of contraction, as observed for DMT, only in the presence of cilostamide (Fig. D).
This positive inotropic effect of 10 µM 5-MeO-DMT was antagonized by tropisetron (Fig. D) as well as by the 5-HT4 receptor antagonist GR125487 (Fig. F). The effect of 5-MeO-DMT was concentration-dependent and is summarized in Fig. B. In a further series of experiments in the presence of cilostamide, concentration-response curves for DMT and 5-MeO-DMT were recorded from 0.1 to 100 µM (Fig. C). Here, the potency and efficacy of DMT and 5-MeO-DMT appeared to be the same. Unfortunately, the pEC50 values could not be calculated accurately because the plateau of the concentration-response curves was not reached even at 100 µM, a concentration usually not reached in humans. An approximate estimate gave a pEC50 ≤ 4.5 for both DMT and 5-MeO-DMT in the human atrium. The evaluation of the time parameters revealed that the time to peak tension and the time of relaxation were shortened in a concentration-dependent manner by DMT and 5-MeO-DMT (Fig. A,C). The contraction kinetics of DMT and 5-MeO-DMT in the human atria (Fig. B,D) were similar to those found in the 5-HT4-TG atria (Fig. ): 5-MeO-DMT reaches the maximum inotropic effect for a given concentration almost twice as fast as DMT.
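As a side note on the potency estimate above: pEC50 is the negative decadic logarithm of the EC50 expressed in mol/L, so pEC50 ≤ 4.5 corresponds to EC50 ≥ ~32 µM. The following minimal Python sketch (illustrative only, not the authors' analysis; the Hill coefficient n = 1 and Emax = 1 are assumptions) also shows why a curve that has not reached its plateau at 100 µM precludes a reliable EC50 fit:

```python
import math

def hill_response(conc_m, ec50_m, emax=1.0, n=1.0):
    """Fractional response of a simple Hill concentration-response curve."""
    return emax * conc_m ** n / (ec50_m ** n + conc_m ** n)

def pec50(ec50_m):
    """pEC50 = -log10(EC50 in mol/L)."""
    return -math.log10(ec50_m)

ec50 = 10 ** (-4.5)                      # pEC50 = 4.5 -> EC50 ~ 31.6 uM
print(round(ec50 * 1e6, 1))              # EC50 expressed in uM

# At the highest tested concentration (100 uM) this curve has only
# reached ~76 % of Emax, i.e., the plateau needed to fix Emax (and
# hence EC50) is still missing from the measured range:
print(round(hill_response(100e-6, ec50), 2))
```

With these assumed parameters, the response at 100 µM is roughly three quarters of the (unobserved) maximum, which is why only an upper bound on pEC50 could be stated.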
The new finding of this study is the observation that DMT, and particularly 5-MeO-DMT, increases cardiac contractility via human 5-HT4 receptors in the heart. DMT occurs in plants and has long been used in religious settings. DMT can be found, for instance, in the leaves of Diplopterys cabrerana in Colombia and Ecuador. There are drug preparations in Brazil (Amazonas region) called ayahuasca; they include parts of the plant Banisteriopsis caapi. DMT is degraded by the enzyme monoamine oxidase A (MAO-A), which physiologically occurs in the gastrointestinal tract; therefore, users add plant extracts that contain MAO-A inhibitors, which also inhibit MAO-B at higher concentrations. Moreover, DMT is present in about 50 plants in South America. The so-called ayahuasca (a Quechua word translated as "vine of the souls") is a mixture of at least DMT and plant-derived MAO inhibitors. In more detail, ayahuasca is said to be prepared by combining the bark of the Banisteriopsis caapi vine and the leaves of the Psychotria viridis bush. This mixture is boiled for hours and has been consumed since pre-Columbian times by the indigenous tribes of the Amazon Basin. In Brazil, ayahuasca is also used for medical therapeutic purposes. In mice, the lethal dose of DMT is about 47 mg/kg when given intraperitoneally. From rodent studies, the LD50 of DMT in humans was calculated as 1.6 mg/kg if applied intravenously. No human deaths have been reported due to ayahuasca alone, but one death involving polypharmacy that included 5-MeO-DMT has been reported in the literature. When DMT alone was administered by injection in humans (0.7–1.1 mg/kg body weight), the subjects reported visual hallucinations. In humans, a placebo-controlled study with intravenous application of DMT led to peak DMT plasma concentrations of about 0.38 µM, an increase in heart rate, and an increase in blood pressure.
In our study, DMT hardly increased the beating rate in 5-HT4-TG mouse right atrial preparations, and the inotropic effects in human atria only began at 10 µM DMT. This discrepancy could simply be due to the difference between the in vivo application of Strassmann et al. and our in vitro application. Consequently, it can be assumed that heart rate and blood pressure are more sensitively affected by a combination of neuronal, vascular and cardiac effects of DMT after intravenous administration, whereas in our experiments only the direct effects on cardiac myocytes are responsible for any changes in force of contraction (or beating rate). Similarly, when ayahuasca preparations from the Amazon Basin were taken by human volunteers, heart rate and blood pressure increased. Furthermore, it has been shown that DMT binds to 5-HT1A,1B,1D and 5-HT2A,2B,2C,6 and 7 receptors. 5-MeO-DMT, by contrast, binds with high affinity to 5-HT1A,1B,1D and 5-HT5A,6 and 7 receptors, but with markedly lower affinity to 5-HT2A,2B, and 2C receptors compared to DMT. Unfortunately, the binding affinities of DMT and 5-MeO-DMT to 5-HT4 receptors are not known, and the investigation of these parameters was beyond the scope of our study. Therefore, we could only compare the functional effects of DMT, 5-MeO-DMT and 5-HT between 5-HT4-TG and WT mice. The present study demonstrated that DMT and 5-MeO-DMT exerted concentration-dependent positive inotropic effects and were less potent than 5-HT. Thus, we present data suggesting that 5-MeO-DMT and especially DMT, similar to cisapride, are partial agonists at 5-HT4 receptors, but we also noted that the kinetics seem different, because these compounds took more time to reach a plateau than 5-HT in 5-HT4-TG. Moreover, we could show that DMT and in particular 5-MeO-DMT can raise the phosphorylation state of phospholamban. This increased phosphorylation state of phospholamban may mediate the contractile effect of 5-MeO-DMT.
Any augmented phosphorylation of phospholamban will lead to less inhibition of the Ca2+ pump (SERCA): SERCA would pump faster. This would be expected to lead to a more rapid relaxation of the left atrial preparations from 5-HT4-TG mice but not WT mice. Serotonin elevated the phosphorylation state of phospholamban in cardiac preparations from 5-HT4-TG. Likewise, serotonin augmented the phosphorylation state of phospholamban in isolated atrial samples from patients. It should be noted, however, that this study has some limitations. For example, it is debatable whether the results obtained in mouse atria can be extrapolated to humans. The receptor density can be assumed to differ between 5-HT4-TG mice and humans and, furthermore, it is not clear whether the cellular localization or signal transduction of the transgenic receptor is exactly the same as in human cardiomyocytes. However, this transgenic model has been used successfully several times to analyze cardiac effects of 5-HT and of drugs or approved medications acting via 5-HT4 receptors in comparison to human atria (bufotenin; psilocybin; prucalopride and cisapride). Another limitation is that we were not able to determine the EC50 values of DMT and 5-MeO-DMT in either our mouse model or human preparations. This would have been an opportunity to compare the effects of different drugs on the cardiac 5-HT4 receptor precisely, but as it stands we can only estimate them. On the other hand, this could mean that at concentrations that induce effects in the central nervous system, cardiac side effects may be unlikely. In summary, our findings indicate that the hallucinogenic drugs DMT and 5-MeO-DMT can have cardiac side effects via human 5-HT4 receptors, at least under certain circumstances such as an overdose of DMT or 5-MeO-DMT. This knowledge might become important once DMT and 5-MeO-DMT are prescribed to treat depression.
Applicability of F-specific bacteriophage subgroups, PMMoV and crAssphage as indicators of source specific fecal contamination and viral inactivation in rivers in Japan | c0edb318-128a-4627-a6cb-2be939904bac | 10348522 | Microbiology[mh] | Enteric pathogens are contained in human and animal feces. The pathogens excreted from these animals are subjected to wastewater treatment. However, it is difficult to completely remove and inactivate all pathogens by the treatment, and the surviving pathogens are released into the water environment. Enteric viruses are known to be more resistant to water treatments than conventional fecal indicator bacteria, such as coliform bacteria and Escherichia coli ( E . coli ) [ – ]. Recreational activity in water environments such as rivers and lakes, where affected by untreated and treated wastewater, can be one of the causes of infection with enteric pathogens [ – ]. Some viruses, such as the pepper mild mottle virus (PMMoV), crAssphage, and F-specific bacteriophages (FPH), are expected to serve as indicators of viral contamination to complement the role of conventional fecal indicator bacteria [ – ]. Since the conventional fecal indicator bacteria cannot indicate fecal pollution sources, the microbial source tracking (MST) approach is used to identify sources of fecal pollution in water. Recently, 16S ribosomal RNA gene markers of host-specific Bacteroidales , which are predominantly present in human and animal gut flora, have been widely applied as MST tools [ – ]. PMMoV, crAssphage, and FPH subgroups, which are supposed to be specifically present in the feces of either or both humans and animals, are potentially useful MST tools [ – ]. PMMoV, a plant pathogen, is known to be highly abundant in human feces and is considered to be a good indicator of human fecal contamination [ , – ]. PMMoV is abundant in wastewater, surface water, and drinking water sources where human fecal pollution is present [ , , ]. 
However, PMMoV was also detected in chicken, seagull, and pig feces samples. CrAssphage is a bacteriophage that infects the human gut symbiont Bacteroides intestinalis. Like PMMoV, crAssphage is abundant in the human gut and is regarded as a potential indicator of human fecal contamination. CrAssphage has been identified in wastewater and surface water. Previous studies have reported that crAssphage was detected in slurry from the abattoirs of pigs, cattle, and poultry. Since PMMoV and crAssphage can be found in non-human feces, it is necessary to further investigate their host specificity. FPH are abundantly present in human and animal feces. They are classified into FDNA phages (FDNAPHs) and FRNA phages (FRNAPHs). The occurrence and fate of FDNAPHs have not been studied as well as those of FRNAPHs. Previous studies have identified the occurrence of FDNAPHs in municipal wastewater and in pig, cow, and gull feces. FRNAPHs are further classified into four genogroups, from genogroup I (GI) to GIV, which share size and morphology with enteric RNA viruses. GI- and GIV-FRNAPHs are reported to be present in pig, cattle, and poultry feces. GII- and GIII-FRNAPHs are present in human feces. However, some studies have presented data that are not in agreement with these findings. Consequently, the host specificity of FPH subgroups is still unclear. For the quantification of FRNAPHs, reverse transcription (RT)-qPCR assays for each genogroup have been established. Genes from both inactive and infective viruses are quantified by these assays. A quantitative assay for infective FRNAPH genogroups has also been developed. In this assay, infective FRNAPH genogroups are propagated in liquid medium and detected via RT-PCR. By comparing these two assays, the viability of each FRNAPH subgroup in the water environment can be estimated.
Therefore, FRNAPH subgroups can serve as potential indicators for assessing the infectivity of viruses in the environment, which would be helpful for risk management. The objective of this study is to evaluate the applicability of FPH subgroups, PMMoV, and crAssphage as indicators of fecal contamination and viral inactivation in river water affected by either or both human and pig feces. The host specificity of FRNAPH genogroups, FDNAPHs, PMMoV, and crAssphage in the samples was evaluated by principal component analysis (PCA), along with other microbial indicators such as 16S ribosomal RNA gene markers of host-specific Bacteroidales. Viabilities (infectivity indices) of FRNAPH genogroups were estimated by comparing their concentrations measured by the culture-based and molecular assays.
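One simple way to express such an infectivity index — an illustrative definition, not necessarily the exact metric used in the study — is the log10 ratio of the culture-based (MPN) concentration to the gene-copy (RT-qPCR) concentration of the same genogroup:

```python
import math

def infectivity_index(infectious_per_100ml, gene_copies_per_100ml):
    """
    Log10 ratio of infectious units (culture-based MPN) to genome copies
    (RT-qPCR) in the same sample. Values near 0 suggest a largely
    infective population; increasingly negative values suggest that most
    detected genomes belong to inactivated phages.
    """
    return math.log10(infectious_per_100ml / gene_copies_per_100ml)

# Hypothetical example: 1.0 x 10^2 MPN/100 mL of infectious phages
# against 1.0 x 10^4 copies/100 mL of the corresponding genogroup gene:
print(infectivity_index(1.0e2, 1.0e4))   # ~ -2.0 (about 99 % non-infective)
```

The example numbers are invented for illustration; in practice both quantities would come from the IC-RT-PCR (MPN) and RT-qPCR assays described in the Methods.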
Sample collection

Surface water samples were collected monthly from November 2019 to November 2021 at three sites on the Oyabe River and its tributaries (O1, O2, and O3) and at one site each on the Jinzu River and Sho River (J and S, respectively) in Toyama Prefecture, Japan ( ). Access to the sampling sites and sample collection were not restricted, and no permits were required for the field work in this study. In April and May 2020, samples were not collected at the five sites, and in June 2020 the sample at S was not collected, due to the COVID-19 pandemic. Consequently, 108 surface water samples were collected. O1 is located in the mainstream of the Oyabe River. There are approximately 40,000 people around and upstream of O1. Most of them are served by a municipal wastewater treatment plant that discharges its treated effluent downstream of O1. The rest are served by individual and combined septic tank systems, which discharge their treated effluent into the river and can contribute to human fecal pollution at O1. O2 is located on a subsidiary stream of the Oyabe River, approximately 3 km upstream of O1. O2 is affected by wastewater from a pig farm located approximately 3 km upstream. There are no households around or upstream of O2, and therefore it is unlikely that sources other than effluent from septic tanks used by pig farm workers contribute to human fecal contamination at O2. O3 is located on a subsidiary stream of the Oyabe River that joins the mainstream downstream of O1. There is another pig farm upstream of O3. Sites J and S are affected by treated effluent from municipal wastewater treatment plants upstream. At each site, 20 L and 250 mL grab samples were collected in a presterilized polyethylene container and bottle, respectively. The sample in the polyethylene container (20 L) was subjected to concentration and quantification of microbes, while that in the polyethylene bottle (250 mL) was subjected to plate counting assays for E. coli and FPH within 24 h after collection ( ).

Concentration of microbes

The sample in the 20 L polyethylene container was concentrated using a hollow fiber ultrafiltration (HFUF) cartridge (APS-25SA; Asahi Kasei Medical Co., Osaka, Japan), followed by centrifugation and polyethylene glycol (PEG) precipitation, as previously described. Prior to sample filtration, the HFUF was blocked by circulating 200 mL of fetal bovine serum (FBS) in the cartridge for 2 minutes and incubating overnight. After removing the remaining FBS, 20 L of the sample was passed through the HFUF cartridge. After filtration, the retentate (about 100–200 mL) was back-flushed into a collection bottle and mixed with a 100 × elution buffer (10% Tween 80, 1% sodium polyphosphate, 0.1% antifoam A). The mixture was then circulated in the HFUF cartridge for 2 minutes to further recover the bacteria and viruses remaining in the cartridge, and was finally recovered as the primary concentrate. The primary HFUF concentrate was centrifuged at 3,000 × g for 5 minutes, and the supernatant was subjected to secondary concentration by PEG precipitation. The supernatant was mixed with PEG8000 and sodium chloride at final concentrations of 10% and 5.8% (w/v), respectively. The mixture was incubated overnight on a shaker at 4°C and subsequently centrifuged at 10,000 × g for 30 minutes. After discarding the supernatant, the pellet was resuspended in 500 μL of phosphate buffer per 40 mL of supernatant and collected as the secondary concentrate. The secondary concentrate was immediately subjected to liquid culturing for integrated culture RT-PCR (IC-RT-PCR) and then frozen at -80°C. The frozen concentrate was further subjected to RNA and DNA extraction processes within 6 months.
RNA or DNA extraction

Murine norovirus (MNV) and coliphage phiX 174 were used as molecular process controls (MPCs) to evaluate potential underestimation of gene concentrations due to impurities during the RNA extraction-RT-qPCR and DNA extraction-qPCR processes, respectively. Briefly, 140 μL of the secondary concentrate was spiked with 2 μL of MNV (8.5 × 10^5 copies/μL) and subjected to RNA extraction using a QIAamp viral RNA mini kit (Qiagen) to obtain 60 μL of RNA extract, in accordance with the manufacturer's instructions. For DNA extraction, the secondary concentrate and the pellet obtained by centrifuging the primary concentrate were mixed in a ratio that reproduced their ratio in the original sample. For example, if the resultant volumes of the secondary concentrate and the pellet were 2.0 mL and 1.0 mL, respectively, they were mixed in a ratio of 2:1. Then, 100 μL of the mixture was spiked with 5 μL of phiX 174 (8.0 × 10^4 copies/μL) and subjected to DNA extraction using ISOSPIN Blood & Plasma DNA (NIPPON GENE) to obtain 100 μL of DNA extract, in accordance with the manufacturer's instructions. The RNA and DNA extracts were diluted 10-fold with nuclease-free water to mitigate the effect of (RT-) PCR inhibition. Both the 10-fold diluted and undiluted RNA and DNA extracts were subjected to (RT-) qPCR assays. The spiked MNV, indigenous FRNAPH genogroups (GI- to GIV-FRNAPH-gene), PMMoV, GI-norovirus (NoV), and GII-NoV were quantified by RT-qPCR assays. The spiked phiX 174, host-specific Bacteroidales 16S ribosomal RNA gene markers (HF183, Pig-2-Bac, and BacHum), and crAssphage were quantified by qPCR assays. MPCs were also spiked into nuclease-free water and subjected to the same nucleic acid extraction processes as the samples. These extracts served as positive controls for the MPCs and as no-template controls, which did not show any positive signal, for the other target genes. DNA and RNA extracts were stored at 4°C for up to 1 month.
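For orientation, a (RT-) qPCR result in copies/reaction can be back-calculated to copies per 100 mL of river water from the volumes above. The sketch below is a generic conversion, not code from the study: the 2.5 µL template, 60 µL extract, 140 µL extracted concentrate, and 20 L sample volumes are taken from the text, while the 2.0 mL secondary-concentrate volume is only an assumed example (it varied per sample):

```python
def copies_per_100ml(copies_per_reaction,
                     dilution=1.0,                    # 10.0 for the 10-fold diluted extract
                     template_ul=2.5,                 # extract volume per reaction (text)
                     extract_ul=60.0,                 # RNA extract volume (text)
                     extracted_concentrate_ul=140.0,  # concentrate put into extraction (text)
                     concentrate_ml=2.0,              # secondary concentrate volume (assumed)
                     sample_l=20.0):                  # original sample volume (text)
    """Back-calculate a gene concentration in the original water sample."""
    per_ul_extract = copies_per_reaction * dilution / template_ul
    total_in_extract = per_ul_extract * extract_ul            # copies in whole extract
    per_ml_concentrate = total_in_extract / (extracted_concentrate_ul / 1000.0)
    total_in_sample = per_ml_concentrate * concentrate_ml     # copies in whole sample
    return total_in_sample / (sample_l * 10.0)                # 1 L = 10 x 100 mL

# 1 copy/reaction (the qPCR positivity threshold) maps to roughly
# 10^0 copies/100 mL under these assumptions, consistent with the
# detection limit quoted in the statistical-analysis section:
print(round(copies_per_100ml(1.0), 2))
```

The result scales linearly with every volume, so per-sample concentrate volumes and recovery efficiencies would shift the absolute value.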
(RT-) qPCR assay

(RT-) qPCR assays were performed with a qTOWER 3 (analytik Jena). The sequences of primers and TaqMan® probes were derived from previous studies ( ). To quantify viral RNA, one-step RT-qPCR was performed using a One Step PrimeScript™ RT-PCR Kit (Perfect Real Time) (TaKaRa). A reaction mixture (25 μL) was prepared by mixing 2.5 μL of the RNA extract, 12.5 μL of 2 × One Step RT-PCR Buffer III, 0.5 μL of TaKaRa Ex Taq HS, 0.5 μL of PrimeScript RT Enzyme Mix II, 400 nM each of forward and reverse primers, 150 nM of a TaqMan® probe, and nuclease-free water. The reaction was performed under the following thermal cycling conditions: RT reaction at 42°C for 5 minutes, initial denaturation and inactivation of the RT enzyme at 95°C for 10 seconds, followed by 50 cycles of amplification with denaturation at 95°C for 5 seconds, and annealing and extension at assay-specific temperatures ( ) for 30 seconds. For the quantification of microbial DNA, qPCR was performed. A reaction mixture (25 μL) was prepared by mixing 2.5 μL of the DNA extract, 12.5 μL of TaqMan™ Gene Expression Master Mix (Thermo Fisher Scientific), 400 nM each of forward and reverse primers, 150 nM of a TaqMan probe, and nuclease-free water. qPCR was performed under the following thermal cycling conditions: DNA polymerase activation at 50°C for 2 minutes and 95°C for 10 minutes, followed by 50 cycles of amplification with denaturation at 95°C for 10 seconds, and annealing and extension at assay-specific temperatures ( ) for 30 seconds. To obtain a calibration curve, a 10-fold serial dilution (from 1.0 × 10^0 to 1.0 × 10^5 copies/reaction) of standard RNA or standard DNA containing the target sequence was subjected to (RT-) qPCR. If the resultant cycle threshold (Ct) value from a sample corresponded to > 1 copy/reaction, the sample was determined to be positive for the target microbes.
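The calibration step above amounts to a linear regression of Ct on log10(copies/reaction); copies in unknown samples are then read off the fitted line. A minimal sketch with hypothetical Ct values (not data from the study; a slope of -3.3219 = -1/log10(2) corresponds to 100 % amplification efficiency):

```python
def fit_standard_curve(log10_copies, ct_values):
    """Least-squares fit of Ct = slope * log10(copies) + intercept."""
    n = len(log10_copies)
    mx = sum(log10_copies) / n
    my = sum(ct_values) / n
    sxx = sum((x - mx) ** 2 for x in log10_copies)
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_copies, ct_values))
    slope = sxy / sxx
    intercept = my - slope * mx
    efficiency = 10 ** (-1.0 / slope) - 1.0  # 1.0 means 100 % per cycle
    return slope, intercept, efficiency

def copies_from_ct(ct, slope, intercept):
    """Read an unknown sample's copies/reaction off the fitted line."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical 6-point standard (10^0 .. 10^5 copies/reaction):
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
cts = [38.0 - 3.3219 * x for x in xs]
slope, intercept, eff = fit_standard_curve(xs, cts)
```

On this invented curve, a Ct of 38.0 corresponds to 1 copy/reaction, i.e., the positivity threshold described in the text.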
The absence of a positive signal in the no-template control was confirmed in every (RT-) qPCR run to identify potential contamination of the reagents with template.

Quantification of Infectious FRNAPH Genogroups and FDNAPHs

IC-RT-PCR coupled with the most probable number (MPN) approach was applied to quantify infectious FRNAPH genogroups (GI-, GII-, GIII-, and GIV-FRNAPH-inf). For the IC-RT-PCR, 40, 4, 0.4, and 0.04 μL of the secondary concentrate were subjected to FPH propagation in triplicate for each volume. Each volume of the sample was mixed with 40 μL of tryptone glucose broth (TGB) containing Salmonella typhimurium WG49 (WG49) at the exponential growth phase, 20 mg/L of kanamycin, and 100 mg/L of nalidixic acid, and was incubated overnight at 37°C. The presence/absence of infectious FRNAPH genogroups in each volume of the concentrate was determined by RT-PCR. To extract RNA, 3 μL of the sample culture was incubated at 95°C for 5 minutes. The RNA extract was then subjected to one-step RT-PCR using the One Step PrimeScript™ RT-PCR Kit (Perfect Real Time) (TaKaRa), as described above. An infectious FRNAPH genogroup was considered positive if the resultant Ct value was at least 3.3 lower than that which would be obtained with the conventional RT-qPCR, i.e., if the number of phages increased > 10-fold during the liquid culture process, and was less than 30. Otherwise, the amplification curve could have been attributable to inactive FRNAPHs, which cannot multiply during liquid cultivation. The concentration of infectious FRNAPH genogroups was determined by referring to an MPN table for three 10-fold dilutions with three tubes at each dilution, provided by Blodgett (2010). For detection of FPH and FDNAPHs by a spotting assay, tryptone glucose agar (TGA), prepared by adding agar powder to TGB, was distributed onto plates and solidified. Then, 3 μL of each sample culture was dropped onto it and incubated overnight at 37°C.
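The MPN value looked up in such a table is the maximum-likelihood estimate of the organism concentration given the observed positive/negative pattern across dilutions. The sketch below shows that estimator directly (illustrative only; the study used the published Blodgett table, and the example volumes and pattern are hypothetical). As an aside, the 3.3-cycle criterion above follows from 2^3.3 ≈ 10, i.e., one PCR cycle per doubling:

```python
import math

def mpn_per_ml(volumes_ml, tubes, positives):
    """
    Maximum-likelihood MPN (organisms per mL) for a dilution series:
    positives[i] of tubes[i] replicates, each inoculated with volumes_ml[i].
    Solves d(logL)/d(lambda) = 0 by geometric bisection.
    """
    if all(p == 0 for p in positives):
        return 0.0
    if all(p == n for p, n in zip(positives, tubes)):
        return float("inf")            # no negatives: MPN unbounded

    def score(lam):
        s = 0.0
        for v, n, p in zip(volumes_ml, tubes, positives):
            if p and lam * v < 700.0:  # avoid overflow; term -> 0 anyway
                s += p * v / math.expm1(lam * v)  # = p*v*e^(-lam*v)/(1-e^(-lam*v))
            s -= (n - p) * v
        return s

    lo, hi = 1e-9, 1e9
    for _ in range(200):               # bisect on a log grid
        mid = math.sqrt(lo * hi)
        if score(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

# Hypothetical triplicate series with 0.04, 0.004 and 0.0004 mL inocula
# (cf. the 40/4/0.4 uL concentrate volumes used in the text):
print(mpn_per_ml([0.04, 0.004, 0.0004], [3, 3, 3], [3, 1, 0]))
```

For a single dilution this reduces to the closed form lambda = -ln(1 - p/n) / v, which the bisection reproduces.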
The concentration of FPH (FPH-MPN) in the sample was determined by the MPN approach, based on the presence/absence of FPH in each sample culture as judged from plaques formed at the spotted area. Quantification of FDNAPHs was conducted in the same manner, using TGA containing 200 mg/L of RNase (Sigma-Aldrich).

Quantification of Active E. coli and FPH by plate-counting assays

Raw 50 mL samples were subjected to plate counting assays to determine the number of colony-forming units (CFU) of E. coli and plaque-forming units (PFU) of FPH. E. coli in the sample was quantified with Chromocult® Coliform Agar (Merck, Darmstadt, Germany). In addition to the liquid cultivation-based assays described in the previous section, FPH was quantified by the conventional plaque assay (FPH-plaque), according to a previous study that employed S. Typhimurium WG49 as the host strain.

Determining the efficiencies of nucleic acid extraction-(RT-) qPCR

In this study, MNV and phiX 174 were used as molecular process controls, and their detection efficiencies were determined to estimate the efficiencies of the RNA extraction-RT-qPCR and DNA extraction-qPCR processes, respectively. The detection efficiencies were estimated by comparing the observed gene concentration of the MPC in nuclease-free water with that in the samples; namely, the detection efficiency (R, %) was calculated as R = C/C0 × 100, where C represents the observed gene concentration of the MPC in a sample, and C0 represents the observed gene concentration of the MPC in nuclease-free water.

Statistical analysis

Principal component analysis (PCA) and cluster analysis were conducted using IBM SPSS Statistics version 22 and R (version 4.2.0), respectively, to evaluate the host specificity of the microbes among the collected samples. PCA was performed using a correlation matrix with varimax rotation, retaining principal components (PCs) whose eigenvalues were greater than 1.
To reduce the potential negative effects of non-detected data, microbes that showed positive rates of 24% or higher across all analyzed samples were selected for the analysis (i.e., HF183, Pig-2-Bac, GI-, GII-, and GIV-FRNAPH-inf, GII- and GIV-FRNAPH-gene, crAssphage, PMMoV, E. coli, FPH-plaque, FPH-MPN, and FDNAPH). Non-detected results were replaced with values half the detection limits of the (RT-) qPCR, IC-RT-PCR, colony, and plaque assays (approximately 1.0 × 10^0 copies/100 mL, 1.0 × 10^-1.2 MPN/100 mL, 1.0 × 10^0.9 CFU/100 mL, and 1.0 × 10^0.3 PFU/100 mL, respectively). Prior to the analysis, the log10-transformed data for each microbial target were normalized by subtracting the mean and dividing by the standard deviation. The normalized data were also used for the cluster analysis. Multiple comparison tests using Tukey's honest significant difference (HSD) test and the Games-Howell method were conducted to compare the geometric mean concentrations of microbes at different sites. The geometric mean concentrations were calculated after excluding non-detected data. The multiple comparison tests were conducted only on data from sites where the microbe was detected at least three times (i.e., HF183, BacHum, Pig-2-Bac, GI- and GII-FRNAPH-inf, GII-FRNAPH-gene, crAssphage, PMMoV, E. coli, FPH-plaque, FPH-MPN, and FDNAPH). Prior to the multiple comparison tests, an unpaired one-way ANOVA was conducted. If the one-way ANOVA indicated that the concentrations of each microbe were homoscedastic (p < 0.05), Tukey's HSD method was used as a post-hoc comparison; otherwise, the Games-Howell method was used. A chi-square test was conducted to compare the positive rates of microbes at different sites. The significance level was corrected to p < 0.005 by the Bonferroni method to account for the number of comparisons.
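The preprocessing described above (half-detection-limit substitution for non-detects, log10 transformation, z-score normalization) can be sketched as follows. This is an illustrative re-implementation, not the SPSS/R code used in the study; the detection limits are the per-assay values quoted above, and the example concentrations are invented:

```python
import math
import statistics

HALF_DETECTION_LIMIT = {            # half of the quoted limits (per 100 mL)
    "qPCR": 10 ** 0.0 / 2.0,        # 1.0 x 10^0    copies/100 mL
    "MPN":  10 ** -1.2 / 2.0,       # 1.0 x 10^-1.2 MPN/100 mL
    "CFU":  10 ** 0.9 / 2.0,        # 1.0 x 10^0.9  CFU/100 mL
    "PFU":  10 ** 0.3 / 2.0,        # 1.0 x 10^0.3  PFU/100 mL
}

def preprocess(values, assay):
    """
    Replace non-detects (None) with half the assay's detection limit,
    log10-transform, then z-score normalize (subtract mean, divide by SD),
    as done before the PCA and cluster analysis.
    """
    half_dl = HALF_DETECTION_LIMIT[assay]
    logged = [math.log10(v if v is not None else half_dl) for v in values]
    mu = statistics.mean(logged)
    sd = statistics.stdev(logged)
    return [(x - mu) / sd for x in logged]

# Hypothetical crAssphage concentrations (copies/100 mL), one non-detect:
z = preprocess([1.0e3, None, 1.0e4, 1.0e2], "qPCR")
```

Incidentally, the Bonferroni-corrected threshold of p < 0.005 quoted above presumably reflects 0.05 divided by the 10 pairwise comparisons among the five sites (5 choose 2 = 10).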
Surface water samples were collected monthly from November 2019 to November 2021 at three sites on the Oyabe River and its tributaries (O1, O2, and O3) and at one site each on the Jinzu River and the Sho River (J and S, respectively) in Toyama Prefecture, Japan ( ). Access to the sampling sites and sample collection were not restricted, and no permits were required for the field work in this study. Owing to the COVID-19 pandemic, no samples were collected at the five sites in April and May 2020, and no sample was collected at S in June 2020. Consequently, 108 surface water samples were collected. O1 is located on the mainstream of the Oyabe River. Approximately 40,000 people live around and upstream of O1. Most of them are served by a municipal wastewater treatment plant that discharges its treated effluent downstream of O1; the rest are served by individual and combined septic tank systems, which discharge their treated effluent into the river and can contribute to human fecal pollution at O1. O2 is located on a tributary of the Oyabe River, approximately 3 km upstream of O1, and is affected by wastewater from a pig farm located approximately 3 km further upstream. There are no households around or upstream of O2; therefore, it is unlikely that sources other than effluent from the septic tanks used by pig farm workers contribute to human fecal contamination at O2. O3 is located on a tributary of the Oyabe River that joins the mainstream downstream of O1. There is another pig farm upstream of O3. Sites J and S are affected by treated effluent from municipal wastewater treatment plants upstream. At each site, 20 L and 250 mL grab samples were collected in a presterilized polyethylene container and bottle, respectively. The 20 L sample was subjected to concentration and quantification of microbes, while the 250 mL sample was subjected to plate-counting assays for E. coli and FPH within 24 h after collection ( ).
The sample in the 20 L polyethylene container was concentrated using a hollow fiber ultrafiltration (HFUF) cartridge (APS-25SA; Asahi Kasei Medical Co., Osaka, Japan) followed by centrifugation and polyethylene glycol (PEG) precipitation, as previously described [ – ]. Prior to sample filtration, the HFUF cartridge was blocked by circulating 200 mL of fetal bovine serum (FBS) through it for 2 minutes and incubating it overnight. After removing the remaining FBS, 20 L of the sample was passed through the HFUF cartridge. After filtration, the retentate (about 100–200 mL) was back-flushed into a collection bottle and mixed with a 100× elution buffer (10% Tween 80, 1% sodium polyphosphate, 0.1% antifoam A). The mixture was then circulated through the HFUF cartridge for 2 minutes to further recover the bacteria and viruses remaining in the cartridge and was finally recovered as the primary concentrate. The primary HFUF concentrate was centrifuged at 3,000 × g for 5 minutes, and the supernatant was subjected to secondary concentration by PEG precipitation . The supernatant was mixed with PEG 8000 and sodium chloride at final concentrations of 10% and 5.8% (w/v), respectively. The mixture was incubated overnight on a shaker at 4°C and then centrifuged at 10,000 × g for 30 minutes. After the supernatant was discarded, the pellet was resuspended in 500 μL of phosphate buffer per 40 mL of supernatant and collected as the secondary concentrate. The secondary concentrate was immediately subjected to liquid culturing for integrated culture RT-PCR (IC-RT-PCR) and then frozen at −80°C. The frozen concentrate was subjected to RNA and DNA extraction within 6 months.
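As a rough illustration of how a per-reaction assay result can be scaled back to the original river water through this two-step concentration, the back-calculation can be sketched as follows. All volumes here are illustrative assumptions drawn from the workflow described above (the 2.0 mL total secondary-concentrate volume and the function name are hypothetical, not the exact values used in the study):

```python
# Sketch of scaling a per-reaction qPCR count back to copies/100 mL of river water.
# Volumes are illustrative assumptions, not the study's exact values.

SAMPLE_VOL_ML = 20_000.0  # 20 L of river water passed through the HFUF cartridge

def copies_per_100ml(copies_per_reaction, template_ul=2.5, extract_ul=60.0,
                     extracted_concentrate_ul=140.0, concentrate_total_ul=2000.0):
    """Scale reaction -> nucleic acid extract -> secondary concentrate -> sample."""
    copies_in_extract = copies_per_reaction * extract_ul / template_ul
    copies_in_concentrate = copies_in_extract * concentrate_total_ul / extracted_concentrate_ul
    return copies_in_concentrate / SAMPLE_VOL_ML * 100.0
```

Under these assumed volumes, 10 copies per reaction corresponds to roughly 17 copies/100 mL of the original water; the same chain of volume ratios underlies the observed concentrations reported in the Results.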
Murine norovirus (MNV) and coliphage phiX 174 were used as molecular process controls (MPCs) to evaluate potential underestimation of gene concentrations due to impurities during the RNA extraction-RT-qPCR and DNA extraction-qPCR processes, respectively. Briefly, 140 μL of the secondary concentrate was spiked with 2 μL of MNV (8.5 × 10^5 copies/μL) and subjected to RNA extraction using a QIAamp Viral RNA Mini Kit (Qiagen) to obtain 60 μL of RNA extract, in accordance with the manufacturer's instructions. For DNA extraction, the secondary concentrate and the pellet obtained by centrifuging the primary concentrate were mixed in a ratio that reproduced their ratio in the original sample; for example, if the resultant volumes of the secondary concentrate and the pellet were 2.0 mL and 1.0 mL, respectively, they were mixed in a ratio of 2:1. Then, 100 μL of the mixture was spiked with 5 μL of phiX 174 (8.0 × 10^4 copies/μL) and subjected to DNA extraction using ISOSPIN Blood & Plasma DNA (NIPPON GENE) to obtain 100 μL of DNA extract, in accordance with the manufacturer's instructions. The RNA and DNA extracts were diluted 10-fold with nuclease-free water to mitigate the effect of (RT-) PCR inhibition, and both the 10-fold diluted and undiluted extracts were subjected to (RT-) qPCR assays. The spiked MNV, indigenous FRNAPH genogroups (GI-GIV-FRNAPH-gene), PMMoV, GI-norovirus (NoV), and GII-NoV were quantified by RT-qPCR assays. The spiked phiX 174, host-specific Bacteroidales 16S ribosomal RNA gene markers (HF183, Pig-2-Bac, and BacHum), and crAssphage were quantified by qPCR assays. The MPCs were also spiked into nuclease-free water and subjected to the same nucleic acid extraction processes as the samples; these extracts were used as positive controls for the MPCs and as no-template controls, which did not show any positive signal, for the other target genes. DNA and RNA extracts were stored at 4°C for up to 1 month.
(RT-) qPCR assays were performed with a qTOWER 3 (Analytik Jena). The sequences of the primers and TaqMan® probes were derived from previous studies ( ) [ , , – ]. To quantify viral RNA, one-step RT-qPCR was performed using a One Step PrimeScript™ RT-PCR Kit (Perfect Real Time) (TaKaRa). A 25 μL reaction mixture was prepared by mixing 2.5 μL of the RNA extract, 12.5 μL of 2× One Step RT-PCR Buffer III, 0.5 μL of TaKaRa Ex Taq HS, 0.5 μL of PrimeScript RT Enzyme Mix II, 400 nM each of the forward and reverse primers, 150 nM of a TaqMan® probe, and nuclease-free water. The reaction was performed under the following thermal cycling conditions: RT reaction at 42°C for 5 minutes; initial denaturation and inactivation of the RT enzyme at 95°C for 10 seconds; followed by 50 cycles of amplification with denaturation at 95°C for 5 seconds and annealing and extension at the temperature specific to each assay ( ) for 30 seconds. To quantify microbial DNA, qPCR was performed. A 25 μL reaction mixture was prepared by mixing 2.5 μL of the DNA extract, 12.5 μL of TaqMan™ Gene Expression Master Mix (Thermo Fisher Scientific), 400 nM each of the forward and reverse primers, 150 nM of a TaqMan probe, and nuclease-free water. qPCR was performed under the following thermal cycling conditions: DNA polymerase activation at 50°C for 2 minutes and 95°C for 10 minutes, followed by 50 cycles of amplification with denaturation at 95°C for 10 seconds and annealing and extension at the temperature specific to each assay ( ) for 30 seconds. To obtain a calibration curve, a 10-fold serial dilution (1.0 × 10^0 to 1.0 × 10^5 copies/reaction) of standard RNA or standard DNA containing the target sequence was subjected to (RT-) qPCR. If the resultant cycle threshold (Ct) value from a sample corresponded to > 1 copy/reaction, the sample was judged positive for the target microbe. Absence of a positive signal in the no-template control was confirmed in every (RT-) qPCR run to rule out contamination of the reagents with template.
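Under the calibration-curve approach described above, a sample Ct value is converted to copies per reaction by inverting a linear fit of Ct against the log10 standard concentration. A minimal sketch (the standard-curve Ct values are hypothetical, chosen to give an ideal slope of −3.3 cycles per log10):

```python
import numpy as np

def fit_standard_curve(log10_copies, ct_values):
    """Fit Ct = slope * log10(copies) + intercept by least squares."""
    slope, intercept = np.polyfit(log10_copies, ct_values, 1)
    return slope, intercept

def ct_to_copies(ct, slope, intercept):
    """Invert the calibration curve to estimate copies per reaction."""
    return 10.0 ** ((ct - intercept) / slope)

# Hypothetical Ct values for a 10-fold dilution series (1e0 to 1e5 copies/reaction)
log10_std = np.arange(6, dtype=float)          # 0 .. 5
ct_std = 38.0 - 3.3 * log10_std                # ideal ~100% amplification efficiency

slope, intercept = fit_standard_curve(log10_std, ct_std)
copies = ct_to_copies(30.0, slope, intercept)  # a sample with Ct = 30.0
```

With these illustrative numbers, a Ct of 30.0 maps to a few hundred copies per reaction; the positivity rule in the text (> 1 copy/reaction) corresponds to a Ct below the curve's intercept.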
IC-RT-PCR coupled with the most probable number (MPN) approach was applied to quantify infectious FRNAPH genogroups (GI-, GII-, GIII-, and GIV-FRNAPH-inf). For the IC-RT-PCR, 40, 4, 0.4, and 0.04 μL of the secondary concentrate were subjected to FPH propagation in triplicate for each volume. Each volume of the sample was mixed with 40 μL of tryptone glucose broth (TGB) containing Salmonella Typhimurium WG49 (WG49) at the exponential growth phase, 20 mg/L of kanamycin, and 100 mg/L of nalidixic acid, and was incubated overnight at 37°C. The presence/absence of infectious FRNAPH genogroups in each volume of the concentrate was determined by RT-PCR. To extract RNA, 3 μL of the sample culture was incubated at 95°C for 5 minutes. The RNA extract was then subjected to one-step RT-PCR using the One Step PrimeScript™ RT-PCR Kit (Perfect Real Time) (TaKaRa), as described above. An infectious FRNAPH genogroup was considered positive if the resultant Ct value was less than 30 and at least 3.3 cycles lower than the value that would be obtained by conventional RT-qPCR, i.e., if the number of phages increased > 10-fold during the liquid culture process. Otherwise, the amplification curve could be attributed to inactive FRNAPHs, which cannot multiply during the liquid cultivation process. The concentration of infectious FRNAPH genogroups was determined by referring to an MPN table for three 10-fold dilutions with three tubes at each dilution, provided by Blodgett (2010) . For detection of FPH and FDNAPH by a spotting assay, tryptone glucose agar (TGA), prepared by adding agar powder to TGB, was distributed onto plates and solidified. Then, 3 μL of each sample culture was spotted onto it and incubated overnight at 37°C. The concentration of FPH (FPH-MPN) in the sample was determined by the MPN approach, based on the presence/absence of FPH in each sample culture as judged from plaques formed at the spotted area. Quantification of FDNAPH was conducted in the same manner, using TGA containing 200 mg/L of RNase (Sigma-Aldrich).
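The study reads MPN values from Blodgett's tables. As a hedged illustration of the same idea, the Thomas (1942) closed-form approximation estimates the MPN directly from the positive-tube pattern; the function name and the example tube pattern below are hypothetical, and a real analysis should use the tabulated values as the authors did:

```python
import math

def mpn_thomas(positives, volumes_ml, tubes_per_dilution=3):
    """Thomas approximation: MPN/mL = P / sqrt(V_neg * V_total), where P is the
    number of positive tubes, V_neg the sample volume in negative tubes, and
    V_total the sample volume in all tubes. Undefined if every tube is positive."""
    p = sum(positives)
    v_total = sum(v * tubes_per_dilution for v in volumes_ml)
    v_neg = sum(v * (tubes_per_dilution - n) for n, v in zip(positives, volumes_ml))
    return p / math.sqrt(v_neg * v_total)

# Hypothetical pattern for 40, 4, 0.4, and 0.04 uL of concentrate per tube
mpn = mpn_thomas(positives=[3, 2, 1, 0],
                 volumes_ml=[0.04, 0.004, 0.0004, 0.00004])  # MPN per mL of concentrate
```

Note that the approximation breaks down when all tubes are positive (V_neg = 0), one reason tabulated MPN values with confidence limits are preferred in practice.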
Quantification of active E. coli and FPH by plate-counting assays
Raw 50 mL samples were subjected to plate-counting assays to determine the number of colony-forming units (CFU) of E. coli and plaque-forming units (PFU) of FPH. E. coli in the sample was quantified with Chromocult® Coliform Agar (Merck, Darmstadt, Germany). In addition to the liquid cultivation-based assays described in the previous section, FPH was quantified by the conventional plaque assay (FPH-plaque), according to a previous study that employed S. Typhimurium WG49 as the host strain .
Determining the efficiencies of nucleic acid extraction-(RT-) qPCR
In this study, MNV and phiX 174 were used as molecular process controls, and their detection efficiencies were determined to estimate the efficiencies of the RNA extraction-RT-qPCR and DNA extraction-qPCR processes, respectively. The detection efficiencies were estimated by comparing the observed gene concentration of the MPC in nuclease-free water with that in the samples; namely, the detection efficiency (R, %) was calculated as

R = C / C0 × 100

where C represents the observed gene concentration of the MPC in a sample, and C0 represents the observed gene concentration of the MPC in nuclease-free water.
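The definition above is a simple percentage ratio; a one-line sketch with hypothetical values:

```python
def detection_efficiency(c_sample, c_control):
    """R = C / C0 * 100 (%), following the definition above."""
    return c_sample / c_control * 100.0

# Hypothetical observed MPC concentrations (copies/reaction)
r = detection_efficiency(c_sample=1.2e3, c_control=8.5e3)
# Extracts with R < 10% were re-assayed after 10-fold dilution in this study
```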
Statistical analysis
Principal component analysis (PCA) and cluster analysis were conducted using IBM SPSS Statistics version 22 and R (version 4.2.0), respectively, to evaluate the host specificity of the microbes among the collected samples. PCA was performed using a correlation matrix with a varimax rotation, retaining principal components (PCs) whose eigenvalues were greater than 1 . To reduce the potential negative effects of non-detected data, only microbes with positive rates of 24% or higher across all analyzed samples were included in the analysis (i.e., HF183, Pig-2-Bac, GI-, GII-, and GIV-FRNAPH-inf, GII- and GIV-FRNAPH-gene, crAssphage, PMMoV, E. coli, FPH-plaque, FPH-MPN, and FDNAPH). Non-detected results were replaced with values half the detection limits of the (RT-) qPCR, IC-RT-PCR, colony, and plaque assays (approximately 1.0 × 10^0 copies/100 mL, 1.0 × 10^−1.2 MPN/100 mL, 1.0 × 10^0.9 CFU/100 mL, and 1.0 × 10^0.3 PFU/100 mL, respectively). Prior to the analysis, the log10-transformed data for each microbial target were normalized by subtracting the mean and dividing by the standard deviation. The normalized data were also used for the cluster analysis. Multiple comparison tests using Tukey's honest significant differences (HSD) test and the Games-Howell method were conducted to compare the geometric mean concentrations of microbes at different sites. The geometric mean concentrations were calculated after excluding non-detected data. The multiple comparison tests were conducted only on data from sites where the microbe was detected at least three times (i.e., HF183, BacHum, Pig-2-Bac, GI- and GII-FRNAPH-inf, GII-FRNAPH-gene, crAssphage, PMMoV, E. coli, FPH-plaque, FPH-MPN, and FDNAPH). Prior to the multiple comparison tests, an unpaired one-way ANOVA was conducted. If the one-way ANOVA indicated that the concentrations of each microbe were homoscedastic ( p < 0.05), Tukey's HSD method was used as the post-hoc comparison; otherwise, the Games-Howell method was used. A chi-square test was conducted to compare the positive rates of microbes at different sites. The significance level was corrected to p < 0.005 by the Bonferroni method to account for the number of comparisons.
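The preprocessing pipeline described here (half-detection-limit substitution, log10 transform, z-score normalization, then correlation-matrix PCA with the eigenvalue > 1 retention rule) can be sketched in a few lines. This is a minimal sketch, not the SPSS procedure: the varimax rotation is omitted, and the data are randomly generated placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical log10 concentrations (samples x targets); np.nan marks non-detects
data = rng.normal(loc=3.0, scale=1.0, size=(20, 4))
data[rng.random(data.shape) < 0.1] = np.nan

# 1) replace non-detects with half the detection limit (here, ~1e0 copies/100 mL)
half_dl = np.log10(0.5 * 1e0)
data = np.where(np.isnan(data), half_dl, data)

# 2) normalize: subtract the mean and divide by the standard deviation per target
z = (data - data.mean(axis=0)) / data.std(axis=0, ddof=1)

# 3) PCA on the correlation matrix; retain PCs with eigenvalue > 1
corr = np.corrcoef(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
retained = eigvecs[:, eigvals > 1.0]
explained_pct = eigvals / eigvals.sum() * 100.0  # % of variance per PC
```

Because the correlation matrix has unit diagonal, the eigenvalues sum to the number of targets, which is why "eigenvalue > 1" marks PCs that explain more variance than a single original variable.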
Detection efficiency of MPCs
To determine the efficiency of RNA extraction-RT-qPCR, MNV was spiked into the samples as a control and recovered by RT-qPCR from undiluted and 10-fold diluted RNA extracts. The detection efficiency of the spiked MNV was < 10% in 24 of the 108 undiluted RNA extracts. The detection efficiency in 8 of these 24 extracts improved to > 10% upon 10-fold dilution of the RNA extract, while the efficiency of the remaining 16 extracts was not noticeably improved by dilution. These 16 samples were presumably affected by inefficient RNA extraction rather than RT-qPCR inhibition. Similarly, the detection efficiency of phiX 174, which was spiked to estimate the efficiency of DNA extraction-qPCR, was < 10% in 14 of the 108 undiluted DNA extracts. The detection efficiencies in 4 of these 14 extracts improved to > 10% upon 10-fold dilution of the DNA extract, while those in the remaining 10 samples were not noticeably improved by dilution. These 10 samples were presumably affected by inefficient DNA extraction rather than qPCR inhibition. Regardless of the detection efficiencies, both undiluted and 10-fold diluted RNA and DNA extracts were subjected to microbial gene quantification by (RT-) qPCR, and the one that showed the higher observed concentration was selected. In this study, phiX 174 was selected as the MPC because it is easy to handle and widely used as a model for enteric viruses . A possible limitation of phiX 174 as the MPC is that it is an ssDNA virus, whereas the dsDNA of crAssphage and bacteria was targeted in this study. How well phiX 174 represents these microbes in the DNA extraction process needs to be clarified in the future. It is known that sensitivity to (RT-) PCR inhibitors depends on the PCR assay . Therefore, even if the MPCs could be efficiently quantified, quantification of other microbes could be underestimated. Considering this, both the 10-fold diluted and undiluted extracts were subjected to (RT-) qPCR assays for all the targets, and the one with the higher observed concentration was selected.
Occurrence of MST markers
( ) and ( ) summarize the positive rates and observed concentrations, respectively, of each microbial target. The observed concentrations were calculated considering the concentration factors and tested volumes, but not the detection efficiency of the MPCs. GI-GIV-FRNAPH-inf in 5 samples collected in June 2021 was not quantified due to a problem with host growth. Similarly, FDNAPH in 14 samples collected in February, March, and June 2020 was not quantified because the samples were used up before the analysis. BacHum in the samples collected after December 2020 was not quantified because HF183 showed a higher detected concentration and detection frequency, indicating that BacHum is less informative, as described below. HF183, BacHum, and Pig-2-Bac were detected in 81% (88/108), 75% (36/48), and 82% (89/108) of the samples, respectively. At all sites except O2, HF183 was detected at positive rates of 89–100%. The geometric mean concentration of HF183 at J was significantly higher than that at O1, O2, and O3 (Games-Howell, p < 0.05). In contrast, at O2, HF183 was detected in 35% (8/23) of the samples and showed a significantly lower observed concentration than at the other four sites. BacHum tended to be detected at slightly lower concentrations than HF183 at each site and showed a lower positive rate. Pig-2-Bac was detected in all samples (23/23) collected at O2, with a geometric mean concentration (10^4.4 copies/100 mL) significantly higher than at the other four sites (Games-Howell, p < 0.05). At O1 and O3, Pig-2-Bac was detected at relatively high frequencies: 91% (21/23) and 96% (22/23), respectively. At J and S, Pig-2-Bac was detected at relatively low frequencies, 80% (16/20) and 37% (7/19), respectively, with geometric mean concentrations (10^2.1 and 10^2.3 copies/100 mL, respectively) significantly lower than that at O2 (Games-Howell, p < 0.05).
Occurrences of FRNAPH-gene and FRNAPH-inf
GI-FRNAPH-gene was not detected at any of the sites. GII-FRNAPH-gene was detected in 82% (89/108) of the samples. GIII-FRNAPH-gene was detected at a relatively low frequency (5% (5/108)), while GIV-FRNAPH-gene was detected in 40% (43/108) of the samples. At O2, GII-FRNAPH-gene was detected at a frequency of 30% (7/23), while it was detected in 91–100% of the samples collected at the other sites. Moreover, the geometric mean concentration of GII-FRNAPH-gene at O2 (10^0.63 copies/100 mL) was significantly lower than that at the other sites (Games-Howell, p < 0.05). GIII-FRNAPH-gene was detected only in samples at O1 (9% (2/23)), O2 (4% (1/23)), and J (10% (2/20)). At O2, GIV-FRNAPH-gene was detected in all the samples (23/23), with a geometric mean concentration (10^2.4 copies/100 mL) significantly higher than at the other sites (Games-Howell, p < 0.05). GIV-FRNAPH-gene was also detected at a relatively high frequency of 61% (14/23) at O1, while it was detected at lower frequencies of 5–13% at O3, J, and S. GI-FRNAPH-inf was detected in 73% (75/103) of all samples, with detection rates at each site other than S ranging from 64% to 89%. At S, it was detected at a significantly lower frequency (44% (8/18)) than at O1 and J (chi-square test, p < 0.005). GII-FRNAPH-inf was detected in 43% (44/103) of all samples, with detection rates at each site other than O2 ranging from 27% to 61%. At O2, it was detected at a significantly lower frequency (14% (3/22)) than at the other sites (chi-square test, p < 0.005). For GI- and GII-FRNAPH-inf, no significant differences in geometric mean concentrations among the sites were observed (Tukey, p < 0.05). At O3, GII-FRNAPH-inf was detected in 27% (6/22) of the samples, while GII-FRNAPH-gene was detected in all the samples. GIII-FRNAPH-inf was detected in only 2% (2/103) of all samples. GIV-FRNAPH-inf was detected in 24% (25/103) of all samples and in 0–23% of the samples at sites other than O2, where it was detected at a significantly higher frequency (73% (16/23)) than at the other sites (chi-square test, p < 0.005).
Occurrences of crAssphage, PMMoV, and NoVs
CrAssphage was detected in 99% (107/108) of the samples. PMMoV was detected in all the samples (n = 108). GI- and GII-NoVs were not detected in any of the samples. The geometric mean concentrations of crAssphage did not differ significantly among the sites (10^3.2–10^3.7 copies/100 mL) except for O2 (10^2.0 copies/100 mL) (Games-Howell, p < 0.05). The geometric mean concentration of PMMoV at J (10^4.2 copies/100 mL) was the highest among the sites and was significantly higher than that at sites O2, O3, and S (Games-Howell, p < 0.05). There was no significant difference in the PMMoV geometric mean concentration between the other pairs of sites (Games-Howell, p < 0.05).
Occurrences of E. coli, FPH, and FDNAPH
E. coli was detected in 100% (108/108) of the samples. FPH-MPN and FPH-plaque were detected in 100% (108/108) and 68% (73/108) of the samples, respectively. FDNAPH was detected in 93% (87/94) of the samples tested. The geometric mean concentrations of E. coli, FPH-MPN, FPH-plaque, and FDNAPH at O2 were significantly higher than those at the other sites (p < 0.05; E. coli, FPH-MPN, and FDNAPH: Tukey HSD; FPH-plaque: Games-Howell).
Microbial characterization by PCA
To study the host specificity of potential indicators, PCA was employed. This PCA explained 58.1% of the total variance in the data (PC1: 36.9%, PC2: 21.2%) ( ). The microbial targets were roughly separated into three groups based on the distribution of the plots in . In the fourth quadrant, especially in the area of x > 0.6 and y < 0.0, GIV-FRNAPH-gene and -inf, E. coli, FPH-plaque, FPH-MPN, and FDNAPH were plotted with Pig-2-Bac, suggesting their close relationship with porcine contamination. Among them, FDNAPH was plotted closest to Pig-2-Bac. In the second quadrant, especially around x = −0.4, y = 0.7, GII-FRNAPH-gene and crAssphage were plotted with HF183, suggesting a close relationship with human contamination. In the first quadrant, the other microbes, namely GI-FRNAPH-inf, GII-FRNAPH-inf, and PMMoV, were plotted. The PCA in our study explained only 58.1% (PC1: 36.9%, PC2: 21.2%) of the total variance. This might be caused by the third group of microbes (GI- and GII-FRNAPH-inf and PMMoV), which were not classified into either the pig group or the human group. After excluding the third-group data, the percentage of variance explained by PCA improved to 67.5% (PC1: 40.0%, PC2: 27.5%) ( ).
FDNAPH/FPH-MPN concentration ratios
To clarify the host specificity of FDNAPH, which the PCA suggested was more associated with pig feces, FDNAPH/FPH-MPN concentration ratios (FDNAPH/FPH-MPN) were compared in . The geometric means of FDNAPH/FPH-MPN at J (10^−0.7±0.7) and O2 (10^−0.9±0.9), which are supposed to be affected mainly by human and pig feces, respectively, were almost equivalent.
The infectivity and gene concentrations of the FRNAPH genogroups in the samples were determined via MPN-IC-RT-PCR and RT-qPCR assays, respectively. The observed concentrations of infectivity and gene of GII-FRNAPH at sites other than O2 and GIV-FRNAPH at O1 and O2, whose detection rates were 30% or higher, were compared ( and ). In , the infectivity index, which is defined as log 10 -transformed ratio of the concentration determined by infectivity assay (MPN-IC-RT-PCR) to the concentration determined by gene quantification assay (RT-qPCR), is also indicated. Regarding GIV-FRNAPH at O2, six samples were found to be positive only by RT-qPCR assay, suggesting that GIV-FRNAPH in the samples was highly inactivated. Notably, all six samples were collected in the warm months (July to November). At O1, GIV-FRNAPH tended to show higher infectivity index values during the cool months (December to June) ( ). The infectivity index of GII-FRNAPH showed variable values between -3.1 and 0.4 log 10 regardless of the season and site ( ).
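As a worked example of the infectivity index definition above (a sketch in Python; the function name and the sample numbers are ours, not from the study):

```python
import math

def infectivity_index(mpn_per_100ml: float, copies_per_100ml: float) -> float:
    """log10 ratio of the infectivity-assay concentration (MPN) to the
    gene-quantification concentration (copies). Values near 0 mean most
    detected genomes are still infectious; strongly negative values mean
    the phage population is largely inactivated."""
    return math.log10(mpn_per_100ml / copies_per_100ml)

# Hypothetical sample: 5 MPN/100 mL infectious phages but 500 gene
# copies/100 mL by RT-qPCR gives an index of -2.0 (about 99% inactivated).
print(infectivity_index(5.0, 500.0))  # -2.0
```

A sample positive only by RT-qPCR, like the six warm-month GIV-FRNAPH samples at O2, has no finite index; it simply indicates a highly inactivated population.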
Contamination sources at each site implied by MST markers

Previous studies on MST markers have reported high sensitivity (100%) of Pig-2-Bac to pig feces . HF183 and BacHum have also been reported to exhibit high sensitivities (77–100%) to municipal wastewater samples [ , , ] and human feces [ – ]. In this study, HF183 and BacHum exhibited similar trends at each site. The forward primer for HF183 shares 16 bases with that of BacHum, and these two markers target the same region on the 16S rRNA gene [ , , ]. Considering that HF183 showed a higher detection rate and concentration, we mainly focused on HF183 as the MST marker of human fecal contamination in this study. At O2, Pig-2-Bac was detected in all the samples at a concentration significantly higher than those at the other four sites (10 4.6 copies/100 mL). This strongly suggests that O2 is affected by wastewater from the pig farm upstream. HF183 was also detected at O2, but its positive rate and concentrations were low (35% (8/23) and < 2.0 log 10 copies/100 mL, respectively). This implies that pig farm workers contributed to the contamination at O2, but the effect was limited. At O1, both HF183 and Pig-2-Bac were detected at high frequencies (96% and 91%, respectively) and concentrations. This implies that O1 was affected by both human and pig feces. Considering that the concentration of Pig-2-Bac at O1 was lower than that at O2, the porcine contamination at O1 was probably attributable to O2. Similarly, O3 and J, where Pig-2-Bac and HF183 were detected at high frequencies, seem to be affected by both municipal wastewater and pig farm wastewater. At S, HF183 was detected at a frequency of 89% (17/19) and at concentrations comparable to the sites other than O2, while Pig-2-Bac was not frequently detected (37%, 7/19). This implies that S was affected mainly by municipal wastewater rather than by pigs.
Host specificity of FRNAPHs

In the PCA, GIV-FRNAPH-gene and -inf were plotted near Pig-2-Bac, suggesting that GIV-FRNAPH was excreted specifically by pigs. Similarly, GII-FRNAPH-gene was plotted near HF183, suggesting that GII-FRNAPH was excreted specifically by humans. These results highlight the efficacy of GIV-FRNAPH and GII-FRNAPH-gene as indicators of pig and human fecal contamination, respectively. In contrast, GII-FRNAPH-inf was classified into neither the human feces group nor the pig feces group. This is because GII-FRNAPH-inf was detected at low frequencies and concentrations even at sites affected mainly by humans, especially at O3. Analysis of the infectivity index strongly suggested that the GII-FRNAPHs in some samples were highly inactivated, which resulted in the low detection frequencies. In the PCA in this study, GI-FRNAPH-inf was also classified into neither the human nor the pig feces group. In general, GI-FRNAPH is regarded as an indicator of non-human fecal contamination . However, GI-FRNAPH was reported to be the predominant FRNAPH genogroup in treated municipal wastewater, even though it was a minor genogroup before wastewater treatment . These findings suggest that GI-FRNAPH cannot be a useful MST tool.

Host specificity of CrAssphage and PMMoV

CrAssphage and PMMoV have been reported to be predominantly present in human feces . PMMoV is derived from pepper products and is excreted in human feces. In the PCA in this study, crAssphage was plotted near HF183. This indicates that crAssphage is a potentially useful indicator of human fecal contamination, in accordance with a previous study . On the other hand, PMMoV was classified into neither the human nor the pig feces group. This is probably because PMMoV was detected in all the samples, and its observed concentration at the site predominantly affected by pigs (O2) was comparable to that at the other sites, excluding J.
A previous study reported that PMMoV was frequently detected in pig feces because pigs are usually fed leftover human food . These findings suggest that PMMoV is not useful as a specific indicator of human fecal contamination.

Host specificity of FDNAPH

E. coli and FPH are abundant in both human and other mammalian feces. E. coli, FPH-plaque, and FPH-MPN were plotted near Pig-2-Bac, which was predominantly detected at O2, in the PCA ( ). This is mainly because they were detected at relatively low concentrations at the sites suggested to be strongly affected by human feces (J and S). It is obvious that they are present in human feces at high concentrations; this result can therefore be attributed to the difference in fecal strength between O2 and the other sites. FDNAPH was also plotted near Pig-2-Bac. However, FDNAPH/FPH at J and O2 were almost equivalent, suggesting that the proportion of FDNAPH among FPH populations does not differ between human and pig feces. Therefore, like E. coli and FPH, FDNAPH was probably plotted near Pig-2-Bac due to the difference in fecal strength between O2 and the other sites. Thus, FDNAPH cannot be regarded as an MST tool to discriminate between human and pig fecal pollution, although a previous study has shown that FDNAPH was significantly more abundant in municipal wastewater than in pig feces .

Infectivity indices of FRNAPH genogroups

The gene quantification assay (RT-qPCR) tends to show higher concentrations of FRNAPHs than the infectivity assay (IC-RT-PCR). This is because the gene quantification assay detects both infectious and inactivated viruses, while the infectivity assay detects only infectious viruses. Thus, the degree of virus inactivation can be estimated by comparing the gene and infectivity concentrations. GI-FRNAPH-gene was not detected in any of the samples, although GI-FRNAPH-inf was detected in 73% of the samples.
The sample volume subjected to RT-qPCR corresponds to approximately 50 mL of a raw water sample, while the volume subjected to IC-RT-PCR corresponds to approximately 2.6 L of a raw water sample. This difference in test volume probably contributed to the result: the concentration of GI-FRNAPH-gene would have to be > 2 copies/100 mL to be detected, even if the detection efficiency were 100%. Considering that GI-FRNAPH-inf was < 10 MPN/100 mL in most of the samples, the result is understandable ( ). GI-FRNAPH-inf was the most frequently detected genogroup among the infectious FRNAPH genogroups. In addition, a previous study has shown that GI-FRNAPHs are more resistant to high temperatures, high pH, and chlorination than the other genogroups . Therefore, GI-FRNAPH is likely the most stable genogroup in the natural environment. At O1 and O2, the infectivity index of GIV-FRNAPH tended to be lower during the warm months, although the trend was less clear at O1. In previous studies, infectious GIV-FRNAPH was detected only in winter in raw wastewater , and in surface water affected by effluents from municipal wastewater treatment plants . GIV-FRNAPH showed the lowest resistance to high temperatures, pH levels, and ultraviolet irradiation , indicating that GIV-FRNAPH is relatively easily inactivated in the environment. Stronger environmental stresses during the warm months were thus likely to result in the lower infectivity index of GIV-FRNAPH. Such a trend was not clearly observed for the other genogroups in this study. The infectivity index of GII-FRNAPH, which was suggested to be specific to human fecal contamination, did not show a clear seasonal trend. This might be due to fluctuations in the operating conditions of the municipal wastewater treatment plants affecting the sites. A previous study has shown that GI- and GII-FRNAPH tend to be more inactivated in the warm season than in the cool season in lake environments .
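The test-volume argument at the start of this paragraph amounts to simple arithmetic, sketched below (our assumptions for illustration: 100% detection efficiency and at least one target copy required in the effectively tested volume).

```python
def min_detectable_conc_per_100ml(tested_volume_ml: float) -> float:
    """Lowest concentration (per 100 mL) at which at least one copy is
    expected in the tested volume, assuming 100% detection efficiency."""
    return 100.0 / tested_volume_ml

gene_assay = min_detectable_conc_per_100ml(50.0)           # RT-qPCR, ~50 mL
infectivity_assay = min_detectable_conc_per_100ml(2600.0)  # IC-RT-PCR, ~2.6 L

# RT-qPCR needs >= 2 copies/100 mL, whereas IC-RT-PCR can in principle
# register < 0.04 MPN/100 mL, consistent with GI-FRNAPH-inf being
# detected while GI-FRNAPH-gene was not.
print(gene_assay, infectivity_assay)
```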
Environmental stress and residence time are considered important factors affecting the infectivity index. The relationship between these factors and the infectivity index needs to be clarified in the future. Comparing the infectivity indices of several genogroups of FRNAPHs or viruses may provide further insight into viral inactivation in the natural environment and by water treatments.

Comparing indicators of fecal contamination and viral inactivation

In this study, HF183, GII-FRNAPH-gene, and crAssphage showed specificity to human feces. At J, the positive rates of these three were almost 100%. The geometric mean concentrations of crAssphage (10 3.6 copies/100 mL) and HF183 (10 3.0 copies/100 mL) were almost comparable, but that of GII-FRNAPH-gene (10 1.6 copies/100 mL) was low, implying that crAssphage and HF183 are better indicators of human fecal contamination than GII-FRNAPH-gene. Pig-2-Bac, GIV-FRNAPH-gene, and GIV-FRNAPH-inf showed host specificity to pig feces. At O2, the positive rates of Pig-2-Bac and GIV-FRNAPH-gene were 100%, while that of GIV-FRNAPH-inf was lower (73%). The geometric mean concentration of Pig-2-Bac (10 4.4 copies/100 mL) was 2 log 10 higher than that of GIV-FRNAPH-gene (10 2.3 copies/100 mL). These results imply that Pig-2-Bac is the better indicator of pig fecal contamination. However, GII- and GIV-FRNAPH can be useful tools to estimate the infectivity index of viruses.

Limitations of the study

Our PCA explained < 60% of the total information. In addition to the presence of the third group of microbes discussed above, the low detection frequencies of GIV-FRNAPH at sites other than O2 might have contributed to this result. Increasing the concentration factor is the most typical way to improve the detection frequency. However, considering that the observed concentration of Pig-2-Bac at O2 is > 2 log 10 higher than that at the other sites, the required concentration factor could be unrealistically high.
Similarly, GII-FRNAPH was detected at low frequencies at O2 and O3 (14% and 27%, respectively). PCA was also performed without these two microbes ( ), but still only 59.8% of the total information could be explained, and the positions of the plots remained almost unchanged. This suggests that their low detection frequencies were not a significant cause of the result. Additionally, although the PCA could separate the microbes associated with pig and human fecal contamination, it remains unclear what factors PC1 and PC2 represent. GIV-FRNAPH-inf was detected more frequently during the cool months, while GIV-FRNAPH-gene did not show such clear seasonality. In our PCA, these two targets were plotted close together, indicating that seasonality had little influence on the PCA. Cluster analysis was also performed. In that analysis, the FPH subgroups formed one large cluster and the other microbes formed another, indicating that cluster analysis was not effective in evaluating the host specificity of microbes in our samples ( ).
In this study, we aimed to evaluate the applicability of FPH subgroups, PMMoV, and crAssphage as indicators of fecal contamination. The host specificity of the FPH subgroups, PMMoV, and crAssphage was evaluated by PCA. We also evaluated the applicability of FRNAPH genogroups as an indicator of viral inactivation by comparing their concentrations measured by cultural and molecular assays. PCA indicated that GIV-FRNAPH-gene and GIV-FRNAPH-inf were specific to pig feces. PCA also indicated that GII-FRNAPH-gene and crAssphage were specific to human feces. However, PMMoV, GI-FRNAPH-inf, and FDNAPH were suggested not to be specific to either human or pig feces. The infectivity index indicated that GIV-FRNAPH was highly inactivated during the warm months (July to November). Comparing the infectivity index of several FRNAPH genogroups or viruses may provide further insight into viral inactivation in the natural environment and by water treatments.
S1 Fig. Infectivity index of GII- and GIV-FRNAPHs in the surface water samples at each site. Infectivity index of GII- (A, B, C, and D) and GIV-FRNAPHs (E) in the surface water samples at each site, where the target FRNAPH subgroup showed detection rates of 30% or higher. Infectivity index (Inf. index) was defined as the difference between the log 10 -transformed concentrations of infectious FRNAPHs (MPN) and their genes (copies), as indicated by diagonal lines. The circles represent samples collected during the warm months (from July to November), and the triangles represent samples collected during the cool months (from December to June). "N.D." on the axis means "not detected." A white plot on the axis indicates that the sample was negative in the cultural and/or RT-qPCR assays. (TIF)
S2 Fig. Principal component analysis (PCA) of observed concentrations of microbial targets excluding those classified in the third group (GI-FRNAPH-inf, GII-FRNAPH-inf, and PMMoV) in the samples (n = 94). The analysis employs concentrations of microbial targets with 24% or higher positive rates, excluding those classified in the third group by the PCA shown on (i.e., HF183, Pig-2-Bac, GII-FRNAPH-gene, GIV-FRNAPH-gene, GIV-FRNAPH-inf, crAssphage, PMMoV, E. coli, FPH-plaque, FPH-MPN, and FDNAPH). Circles (○) and triangles (Δ) refer to the indicators quantified based on gene and viability, respectively. The vertical and horizontal axes indicate principal components (PC) 1 and 2, which explained 40.0% and 27.5% of the total information, respectively. (TIF)
S3 Fig. Principal component analysis (PCA) of observed concentrations of microbial targets that showed 68% or higher positive rates in the samples (n = 94). The analysis employs concentrations of microbial targets with 68% or higher positive rates (i.e., HF183, Pig-2-Bac, GI-FRNAPH-inf, GII-FRNAPH-gene, crAssphage, PMMoV, E. coli, FPH-plaque, FPH-MPN, and FDNAPH). Circles (○) and triangles (Δ) refer to the indicators quantified based on gene and viability, respectively. The vertical and horizontal axes indicate principal components (PC) 1 and 2, which explained 34.8% and 25.0% of the total information, respectively. (TIF)
S4 Fig. A dendrogram from the cluster analysis using Ward's hierarchical method for the observed concentrations of microbial targets (n = 94). (TIF)
S1 Table. Primer and TaqMan probe sequences used for (RT-)qPCR and IC-RT-PCR assays in this study. (DOCX)
S2 Table. Dates of sample collection. (DOCX)
S3 Table. The observed concentrations of microbes (i.e., HF183, BacHum, Pig-2-Bac, GI-, GII-, and GIV-FRNAPH-inf, GII- and GIV-FRNAPH-gene, crAssphage, PMMoV, E. coli, FPH-plaque, FPH-MPN, and FDNAPH) in each sample. (DOCX)
S4 Table. MIQE guideline essential information checklist. (DOCX)
|
Terminal Protection of Small Molecule-Linked DNA for Small Molecule–Protein Interaction Assays | 6e79a3e0-aa39-4807-9885-6db1e45d9e30 | 4013559 | Pathology[mh] | Introduction The affinity binding of small molecules with their target proteins relies on noncovalent but specific interactions, and small molecules that interact with proteins in this way serve as the affinity ligands of the associated proteins . Specific interactions between proteins and small-molecule ligands are fundamental to the regulations of most physiological processes of organisms . Small molecule–protein interaction assays, thus, are critical for revealing the mechanisms of many important physiological processes. Besides, the interaction assay techniques also represent a major avenue to drug screening, biomarker analysis in clinic, and public safety monitoring . Some classic techniques, including affinity chromatography , kinetic capillary electrophoresis , fluorescence polarization and surface plasmon resonance , have been developed for the detection of small molecule–protein interactions. However, problems such as the complex fixation of proteins or small molecules, limited sensitivity, potential nonspecific adsorption, or the requirement of sophisticated instruments frequently limit their widespread application. Beyond the aforementioned methods, Jiang and colleagues proposed a completely different concept of terminal protection assay for the investigation of small molecule–protein interactions . They found an interesting phenomenon: the binding of a protein to a small molecule moiety at one terminus of a DNA module could protect the DNA from digestion by nucleases. Based on this finding, Jiang et al. as well as researchers from other groups have developed a series of methods for sensitive and specific detection of the interactions between proteins and small molecules . 
The advantage of the terminal protection assay is that it translates the binding of small molecules to proteins into the presence of a specific DNA sequence, thereby enabling the detection of small molecule–protein interactions using various DNA sequence amplification and detection technologies . This review traces the principles of terminal protection assays of small molecule-linked DNA and their applications in small molecule–protein interaction assays. In addition, some methods that share the concept of the DNA protection assay are also discussed.
Terminal Protection Assay of Small Molecule-Linked DNA

The primary hypothesis of terminal protection of small molecule-linked DNA was based on a common effect of molecular recognition: owing to their large size, biomacromolecules can impose dramatic steric hindrance and thus inhibit reactions at their binding sites. As illustrated in , binding of a protein to a small-molecule ligand may inhibit enzymatic reactions near the ligand because steric hindrance prevents an enzyme from approaching the reaction site. Specifically, Jiang and colleagues found a general phenomenon: small molecule-linked DNA remained reactive with its exonucleases despite the introduction of a small molecule at a nucleotide of the DNA, but the substrate activity of the DNA was inhibited if there was significant steric hindrance around the reaction site of the exonucleases, thus protecting the DNA from digestion . Based on this finding, they designed two terminal protection strategies, for single-stranded DNA (ssDNA) and double-stranded DNA (dsDNA), respectively.

2.1. Terminal Protection of Small Molecule-Linked Single-Stranded DNA

Jiang et al. first investigated the behavior of exonuclease in reacting with small molecule-linked ssDNA in the terminal protection assay . Exonucleases are nucleases that cleave nucleotides from either the 3′ or 5′ end of a polynucleotide chain into mononucleotides; exonuclease I (Exo I) selectively degrades ssDNA to deoxyribonucleosides in the 3′–5′ direction. They found that after modifying the 3′ end of an ssDNA sequence with a small molecule, for example, a folate molecule on the terminal nucleotide at the 3′ end, the ssDNA was still active toward Exo I and could be hydrolyzed by this nuclease, as shown in . However, upon binding of the protein, the steric hindrance at the 3′ terminus of the ssDNA was dramatically increased, so Exo I was prevented from accessing its reaction site and the hydrolysis reaction was inhibited.
That is, the protein–small molecule interaction event protected the small molecule-labeled ssDNA from digestion by Exo I. For a sensitive small molecule–protein interaction assay, Jiang et al. proposed an electrochemical strategy in which ligand-labeled DNA was wrapped on single-walled carbon nanotubes (SWNTs), and the gold electrode was modified with a dense self-assembled monolayer (SAM) of 16-mercaptohexadecanoic acid (MHA), which isolated the electrode from the solution, blocking electron transfer between redox solutes and the electrode and resulting in no electrochemical signal. When Exo I was added to the reaction system, the folate-labeled DNA was hydrolyzed from its 3′ end, leaving naked SWNTs that could be isolated from the solution and assembled on the hydrophobic SAM. Owing to the electron transfer mediated by the SWNTs between the electrode and the electroactive substance, ferrocenecarboxylic acid, an enhanced redox current signal was observed. In the presence of the target protein, folate receptor (FR), the interaction between FR and the folate-labeled DNA prevented the Exo I-catalyzed hydrolysis reaction, thus leaving intact ssDNA on the surface of the SWNTs. As a result, no significant electrochemical signal was obtained. The proposed terminal protection assay strategy was demonstrated to be very sensitive in protein–small molecule interaction detection owing to the intrinsic electrochemical effect of SWNTs: a linear correlation was obtained over an FR concentration range from 10 pM to 1.0 nM, with a detection limit of 3 pM. Such a low detection limit is desirable for clinical applications. This strategy offers a novel, versatile platform for small molecule–protein interaction assays and a new means for rapid isolation of synthetic small-molecule ligands from libraries of small molecule-linked ssDNA.
Another novel electrochemical strategy based on small molecule-linked DNA, probing the interaction between a small molecule and a protein via a solid-state Ag/AgCl process, was developed by Chai and coworkers . Biotin-labeled DNA was captured on a gold electrode, and the binding of streptavidin (SA) prevented the DNA from being degraded by Exo I. Positively charged AuNPs were then adsorbed on the negatively charged DNA, which in turn catalyzed silver deposition on the AuNPs. The silver on the AuNPs could be detected through a sensitive Ag/AgCl transformation process, producing an amplified electrochemical signal. In the absence of the target protein SA, the biotin-labeled DNA was digested by Exo I and the AuNPs could not be adsorbed, resulting in an obvious signal change. This method offered a detection limit as low as 10 pM in biotin–SA binding assays; the high sensitivity was attributed to the AuNP-catalyzed silver enhancement and the solid-state Ag/AgCl detection mechanism. In addition to these electrochemical strategies based on terminal protection, a graphene oxide-based fluorescent biosensor was constructed for small molecule–protein interaction detection by Pang and colleagues . In this method, a fluorophore-labeled ssDNA was designed that adsorbs on the surface of graphene oxide (GO) with fluorescence quenching. Another, small molecule-labeled ssDNA, complementary to the fluorophore-labeled ssDNA, was first incubated with the target protein, followed by the addition of Exo I. Since the small molecule–protein interaction inhibits the Exo I hydrolysis reaction, intact small molecule-labeled ssDNA remained in the reaction solution and could hybridize with the fluorophore-labeled ssDNA. Because the adsorption efficiency of dsDNA on GO is extremely low, the fluorescence was retained and quantitatively indicated the small molecule–protein interaction event.
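In all of these readouts, the measured signal ultimately tracks the fraction of labeled DNA occupied by protein. For a simple 1:1 equilibrium with protein in excess, that fraction follows a Langmuir-type isotherm, f = [P]/(Kd + [P]); the sketch below is a generic back-of-the-envelope illustration (the Kd and concentrations are arbitrary, not values from the cited studies):

```python
def bound_fraction(protein_conc: float, kd: float) -> float:
    """Fraction of small molecule-labeled DNA bound by protein at
    equilibrium, assuming a simple 1:1 binding model with protein in
    excess: f = [P] / (Kd + [P])."""
    return protein_conc / (kd + protein_conc)

# Illustrative numbers only: a 1 nM-Kd interaction probed at various [P]
for p_nM in (0.1, 1.0, 10.0):
    print(f"[P] = {p_nM:5.1f} nM -> fraction protected = {bound_fraction(p_nM, 1.0):.2f}")
```

At [P] = Kd half the labeled DNA is protected; well above Kd the protection, and hence the signal, saturates.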
Based on the fluorescence quenching and ssDNA-adsorbing properties of GO, together with the terminal protection principle, Su et al. developed a similar strategy successfully used for small molecule–protein interaction assays. Both strategies adopted graphene oxide as the energy acceptor in resonance energy transfer and used it for signal transduction.

2.2. Terminal Protection of Small Molecule-Linked Double-Stranded DNA

Terminal protection assay strategies were also constructed using small molecule-linked dsDNA. Diverse exonucleases can act on dsDNA, such as exonuclease III (Exo III) and lambda exonuclease (Exo λ). The preferred DNA substrates of Exo III are blunt or recessed 3′ termini; Exo III is not active on ssDNA, and protruding 3′ termini are resistant to cleavage. Based on these properties, depicts two strategies of terminal protection of dsDNA for small molecule–protein interaction detection using Exo III. In , the dsDNA is designed to be blunt at both of its 3′ termini, with a small molecule labeled at the 3′ terminus of one strand. In the absence of target protein, both 3′ termini of the dsDNA are reactive toward Exo III, so after the hydrolysis reaction both strands of the DNA are digested. In contrast, the small molecule–protein binding event inactivates the small molecule-labeled strand as an Exo III substrate, preventing this strand from digestion. When the 3′ terminus of the unlabeled strand is overhanging, as shown in , Exo III digests only the small molecule-labeled strand and preserves the unlabeled strand; in that design, protein binding protects the intact dsDNA from digestion by Exo III. These strategies translate the detection of a small molecule–protein interaction into the probing of a dsDNA or an ssDNA. For signal transduction, Jiang et al. designed a hairpin-structured DNA with a small molecule labeled at its 3′ terminus.
In the small molecule–protein interaction assay, the protein bound to the small molecule precluded hydrolysis of the hairpin DNA by Exo III, leaving an intact hairpin that was then stained with SYBR Green I. The small molecule–protein interaction events could thus be quantitatively explored through the fluorescence intensity of the dsDNA stain .
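The Exo III digestion rules used throughout these dsDNA designs can be condensed into a toy decision function. This is purely a didactic sketch of the logic described above (the function and argument names are invented here), not a model of the real enzymology:

```python
def exo_iii_digests(end_type: str, protein_bound: bool) -> bool:
    """Toy rule set for whether Exo III digests a dsDNA strand from its
    3' end, per the terminal-protection scheme described above.

    end_type:      'blunt', 'recessed', or 'protruding' (state of the
                   3' terminus of that strand in the duplex)
    protein_bound: True if a protein is bound to a small-molecule label
                   at that terminus, sterically blocking the enzyme.
    """
    if protein_bound:
        return False  # terminal protection: steric hindrance blocks the enzyme
    return end_type in ("blunt", "recessed")  # protruding 3' ends resist cleavage

# Blunt-ended duplex, no protein: the strand is digested.
assert exo_iii_digests("blunt", protein_bound=False)
# Same terminus with the target protein bound at the label: protected.
assert not exo_iii_digests("blunt", protein_bound=True)
# A 3'-protruding terminus resists cleavage even without protein.
assert not exo_iii_digests("protruding", protein_bound=False)
```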
Signal Amplification in Terminal Protection Assay for Sensitive Detection of Small Molecule–Protein Interaction

The discovery of terminal protection has greatly advanced the detection of small molecule–protein interactions. One of the greatest advantages of the terminal protection assay is that, with DNA as a basic component, it permits flexible signal amplification, which furnishes high sensitivity in small molecule–protein interaction assays . Consequently, a variety of nucleic acid signal amplification techniques can be adopted in terminal protection assays for the detection of small molecule–protein interactions. The following sections review some signal amplification techniques that have been combined with terminal protection assays for this purpose .

3.1. Rolling Circle Amplification

Rolling circle amplification (RCA) is an isothermal nucleic acid amplification strategy that forms a long ssDNA containing thousands of repeated sequences complementary to the circular template . Combining terminal protection with RCA, Chai et al. reported an ultrasensitive electrochemical sensing method for the biotin–SA interaction assay, as illustrated in . Two biotin-labeled ssDNAs were designed. The short biotin-labeled ssDNA was self-assembled on an Au electrode and would be digested by Exo I in the absence of the target protein, SA. The other biotin-labeled ssDNA served as the ligation probe for RCA and was extended in advance into a biotin-labeled long-stranded RCA product. Owing to its multiple binding sites, SA bound the immobilized short biotin-labeled ssDNA and the biotin-labeled RCA product near the electrode at the same time. As a result, the long ssDNA tail adsorbed a large number of electroactive reporters, hexaammineruthenium(III) chloride (RuHex), via electrostatic interactions to give a highly amplified electrochemical signal.
By incorporating the RCA signal amplification technique, the strategy achieved a detection limit as low as 0.4 pM of SA with high selectivity.

3.2. Hybridization Chain Reaction

Hybridization chain reaction (HCR) accomplishes signal amplification by generating a long dsDNA with hundreds of repeated units through a series of hybridization events . Wang et al. demonstrated an electrochemical method for the detection of small molecule–protein interactions by combining terminal protection with HCR . In their assay, binding of FR to folate-labeled DNA precluded the degradation of the DNA by Exo I. The protected DNA could then be captured by a DNA probe fixed on an electrode, exposing its free segment to a ferrocene-labeled DNA probe. This ferrocene-labeled probe further hybridized with another protected DNA through its second region, while its remaining part formed a G-quadruplex horseradish peroxidase (HRP)-mimicking DNAzyme at one end. The alternating hybridization between folate-labeled DNA and ferrocene-labeled DNA probes formed a supersandwich DNA structure, which placed abundant ferrocene and DNAzyme units on the surface of the electrode. The detection of FR could therefore be readily achieved using the redox current signal of the electrochemically catalyzed reduction of H 2 O 2 in the presence of ferrocene and the hemin/DNAzyme. The advantage of this strategy was that the signal amplification was accomplished in a single step, affording improved simplicity and sensitivity in the small molecule–protein interaction assay.

3.3. Nuclease-Assisted Signal Amplification

3.3.1. Nickase-Assisted Signal Amplification

Nucleases, both endonucleases and exonucleases, are useful tools for developing DNA signal amplification methods. A nickase is an endonuclease that requires a specific recognition site in dsDNA but cleaves only one strand of the duplex .
If there is enough ssDNA to form a new duplex with the uncleaved strand, the nickase allows recycling cleavage of the DNA duplex. In a terminal protection assay, nickase-assisted signal amplification can be readily achieved by introducing a recognition site into the sequence of the small molecule-labeled DNA. A sensitive electrochemical method based on nickase-assisted signal amplification was developed by Li et al. for the detection of the FR–folate interaction . In this method, binding of FR to the folate-labeled ssDNA prevented the ssDNA from digestion by Exo I. The intact folate-labeled ssDNA then hybridized with the ssDNA immobilized on the surface of the electrode to form a nickase site, initiating a cycle of nickase cleavage, ssDNA release, and DNA hybridization. Recycling cleavage of the ssDNA immobilized on the electrode weakened the blocking effect against [Fe(CN) 6 ] 3−/4− , accordingly resulting in an increased electrochemical signal. The method was demonstrated to have a detection range of 0.3–20 ng/mL for FR and can be used to investigate small molecule–protein pairs with nanomolar dissociation constants.

3.3.2. Exo III-Assisted Signal Amplification

Zhou et al. developed a strategy of Exo III-assisted recycling cleavage of a fluorescent probe for SA–biotin interaction detection . Based on Exo III-assisted DNA cleavage, they realized "turn off" and "turn on" strategies simultaneously. In the "turn off" strategy, the small molecule biotin was labeled at the 3′ terminus of the antisense strand of the trigger strand involved in the signal amplification cycle, and the trigger strand carried a 3′ overhang. In the absence of streptavidin, the antisense strand was degraded stepwise from the 3′ to the 5′ terminus by Exo III, releasing the trigger strand, which subsequently hybridized with a molecular beacon (MB) to recover the fluorescence.
The duplex formed between the trigger strand and the opened MB was further degraded by Exo III from the 3′ blunt end of the MB, releasing the trigger strand to open a new MB and thus establishing a signal generation cycle. In contrast, binding of SA to the biotin-labeled DNA inhibited the degradation of the antisense strand, so no trigger strand was released to initiate the cycle. In the "turn on" strategy, the biotin was attached to the 3′ end of the trigger strand and the two strands were completely complementary to each other. In this case, both strands could be digested by Exo III and no signal was generated. In the presence of streptavidin, by contrast, only the antisense strand could be degraded by Exo III; the released biotin-labeled trigger strand then opened the MB, establishing the signal generation cycle. This signal amplification strategy furnished the streptavidin assay with an improved detection limit as low as 0.8 fM.

3.4. Dual Amplification Strategy Based on Rolling Circle Amplification and Exo III-Assisted Recycling Cleavage

Combining the RCA technique with exonuclease-assisted recycling cleavage of a fluorescent probe in a terminal protection assay, Chu et al. realized a dual amplification strategy for small molecule–protein interaction detection . In this method, the small molecule-labeled ssDNA was protected from digestion by Exo I through binding to its target protein and then initiated an RCA process. The RCA product was probed at its repeated sequences by fluorescence-quenched probes (TaqMan probes). The hybrid of the RCA product with the TaqMan probes could, in turn, be hydrolyzed by Exo III to separate the fluorophore from its quencher. As the TaqMan probes were hydrolyzed, the RCA product was re-exposed to intact TaqMan probes. As a result, Exo III-assisted recycling cleavage of the TaqMan probes released abundant fluorescent reporters.
Owing to the dual signal amplification, the method may have great potential for the detection of small molecule–protein pairs with low affinities.
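The gain of the dual scheme is roughly multiplicative: one protected ssDNA yields an RCA product with many repeats, and each repeat turns over many TaqMan probes. A minimal sketch, with both factors chosen arbitrarily for illustration:

```python
def dual_amplification(rca_repeats: int, exo_iii_turnovers: int) -> int:
    """Fluorophores released per protected ssDNA in the dual scheme:
    one RCA product carries `rca_repeats` probe-binding repeats, and
    Exo III recycling lets each repeat consume `exo_iii_turnovers`
    quenched probes. Both values are illustrative assumptions."""
    return rca_repeats * exo_iii_turnovers

# e.g. 1000 tandem repeats x 20 probe turnovers per repeat
print(dual_amplification(1000, 20))  # -> 20000 reporters per binding event
```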
Other Methods

4.1. Non-Nuclease-Assisted Terminal Protection Assay

The terminal protection strategies proposed by Jiang and his group exploit the unique properties of exonucleases. Without the participation of an exonuclease, Wu et al. constructed a small molecule-linked DNA strategy for screening small molecule–protein interactions . This non-nuclease-assisted terminal protection assay relied on small molecule–protein binding events protecting a small molecule-labeled DNA duplex from immobilization onto a gold substrate. In this method, the DNA duplex was a hybrid of a 3′-thiolated ssDNA and an ssDNA carrying a small molecule label at its 5′ end. Because of its intrinsic self-assembly behavior, the thiolated DNA is readily immobilized onto the surface of a gold substrate. Upon interaction between the protein and the small molecule, the steric hindrance around the small-molecule labeling site, which was adjacent to the thiolated site of the other strand, increased dramatically and thereby inhibited self-assembly of the DNA duplex on the gold substrate. Based on this strategy, Wu et al. accomplished quantitative detection of the interaction between β-indole acetic acid (IAA) and its antibody on a quartz crystal microbalance (QCM) platform.

4.2. DNA/Fok I Transducer

The foundation of the terminal protection assay is that binding of a protein to small molecule-labeled DNA dramatically increases the steric hindrance around the binding site and thus inhibits the action of exonucleases. Following this principle of altering the working environment of an enzyme, Jiang et al. proposed a more generalized concept of the DNA protection assay and constructed a DNA/Fok I transducer as a sensitive platform for the detection of small molecule–protein interactions . Fok I is an endonuclease that recognizes the sequence 5′-GGATG-3′ in duplex DNA and cleaves nine and thirteen nucleotides downstream of the recognition site on the two strands, respectively .
Like other nucleases, Fok I is highly sensitive to the environment around its recognition site, which makes it possible to produce a differential signal before and after the protein binding event. As shown in , a small molecule-labeled heteroduplex DNA and Fok I together constituted the DNA/Fok I transducer. In the absence of target protein, the small molecule label was not large enough to influence Fok I activity, and the transducer cyclically cleaved the TaqMan probe, continuously generating fluorescence. In the presence of target protein, by contrast, the steric hindrance caused by the small molecule–protein interaction inhibited Fok I activity and no DNA probe could be cleaved; thereby, only a very weak fluorescence signal was observed. In this study, Jiang et al. demonstrated that the DNA/Fok I transducer could detect interactions with dissociation constants ranging from the subnanomolar to the micromolar range and has the potential to become a universal, sensitive, and selective platform for quantitative assays of small molecule–protein interactions .
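The recognition-and-offset rule stated above (recognize 5′-GGATG-3′, cut nine and thirteen nucleotides downstream on the two strands) is easy to express programmatically. The following is a simplified didactic sketch that scans a single strand only; positions are 0-based and each reported cut is the index at which the downstream fragment would begin:

```python
def fok1_cut_sites(seq: str):
    """Find Fok I recognition sites (5'-GGATG-3') in `seq` and report the
    expected cleavage positions: 9 nt downstream on this strand and 13 nt
    downstream on the complementary strand, counted from the end of the
    recognition site. A toy sketch, not a full enzymology model."""
    sites = []
    i = seq.find("GGATG")
    while i != -1:
        end = i + 5  # index of the first nucleotide downstream of the site
        sites.append({"site": i, "top_cut": end + 9, "bottom_cut": end + 13})
        i = seq.find("GGATG", i + 1)
    return sites

# Recognition site starting at index 2, followed by a run of downstream bases
print(fok1_cut_sites("TTGGATG" + "A" * 20))
# -> [{'site': 2, 'top_cut': 16, 'bottom_cut': 20}]
```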
Non-Nuclease-Assisted Terminal Protection Assay The terminal protection assay strategies proposed by Jiang and his group utilized the unique properties of exonucleases. Without the participation of exonuclease, Wu et al. constructed a small molecule-linked DNA conversion for screening the small molecule–protein interaction . This non-nuclease-assisted terminal protection assay relied on small molecule–protein binding events-mediated protection of a small molecule-labeled DNA duplex from being immobilized onto a gold substrate. In this method, the DNA duplex was a hybrid of a 3′ thiolated ssDNA and an ssDNA with a small molecule label at 5′ end. Because of its intrinsic self-assembly behavior, the thiolated DNA is easy to be immobilized onto the surface of a gold substrate. With the interaction between the protein and small molecule, the steric hindrance dramatically increased around the small-molecule labeling site which was adjacent to the thiolated site of the other strand. Thereby, it inhibited the self-assembly of the DNA duplex on the gold substrate. Based on such a strategy, Wu et al. have accomplished the quantitative detection of interaction between β-indole acetic acid (IAA) and its antibody on a platform of quartz-crystal-microbalance (QCM).
DNA/Fok I Transducer The foundation of the terminal protection assay of DNA is that binding of a protein to small molecule-labeled DNA dramatically increases the steric hindrance around the binding site and thus inhibits the action of exonucleases. Following the principle of altering the working environment of enzymes, Jiang et al. proposed a more generalized concept of the protection assay of DNA and constructed a DNA/Fok I transducer as a sensitive platform for detection of small molecule–protein interactions. Fok I is an endonuclease that recognizes the sequence 5′-GGATG-3′ of duplex DNA and cleaves the ninth and thirteenth nucleotides downstream of the recognition site. Like other nucleases, Fok I is also highly sensitive to the environment around the recognition site. This makes it possible to produce a differential signal before and after protein binding events. A small molecule-labeled heteroduplex DNA and Fok I together constituted the DNA/Fok I transducer. In the absence of target protein, the small molecule label was not large enough to influence the Fok I activity, and the transducer cyclically cleaved the TaqMan probe, which continued to activate fluorescence. In contrast, in the presence of target protein, the steric hindrance caused by the small molecule–protein interaction inhibited the Fok I activity and no DNA probe could be cleaved; thereby, only a very weak fluorescence signal was observed. In this study, Jiang et al. demonstrated that the DNA/Fok I transducer could be used to detect interactions with dissociation constants ranging from the subnanomolar to the micromolar range and had the potential to become a universal, sensitive and selective platform for quantitative assays of small molecule–protein interactions.
Conclusions The review traces the recent developments in the field of small molecule–protein interaction assays based upon the terminal protection of small molecule-labeled DNA. Terminal protection is a generalized phenomenon, demonstrated by Jiang et al., in which small molecule–DNA chimeras are protected from degradation by various DNA exonucleases. Since terminal protection converts small molecule–protein interaction assays into the detection of DNA of various structures, diverse DNA sequence amplification and detection technologies may be utilized. Combining varying DNA amplification techniques, such as RCA, HCR, or DNA nuclease-assisted recycling amplification, subsequently improves the sensitivity of small molecule–protein interaction assays. Moreover, different signal readout approaches for DNA detection allow the development of highly specific, simple, cost-efficient, rapid, robust and high-throughput methods for small molecule–protein interaction assays. To sum up, the terminal protection assay of small molecule-linked DNA serves as a versatile tool for interrogating the interaction between proteins and small molecule ligands. With the pursuit of simple, high-throughput and highly sensitive analytical methods, the terminal protection assay is expected to hold considerable potential in small molecule–protein interaction investigation and related studies.
Biologics for severe uncontrolled chronic rhinosinusitis with nasal polyps: a change management approach. Consensus of the Joint Committee of Italian Society of Otorhinolaryngology on biologics in rhinology

Chronic rhinosinusitis with nasal polyposis (CRSwNP) is a complex inflammatory disorder including multiple phenotypes. It is a debilitating disease that has a substantial impact on the patient's quality of life, with significant healthcare-related costs. Over the years, management strategies have focused mainly on symptom relief, including intranasal corticosteroids (INCS), saline irrigations and brief courses of systemic corticosteroids (SCS), with or without antibiotics, to manage acute exacerbations that may be associated with significant quantitative changes in inflammatory biomarkers. Optional treatments include macrolides, anti-leukotrienes, antihistamines and aspirin desensitisation, the latter only for patients with NSAID-Exacerbated Respiratory Disease (N-ERD). If maximal medical therapy does not lead to adequate control of symptoms, endoscopic sinus surgery (ESS) is considered to remove inflammatory tissue, alleviating nasal obstruction and expediting delivery of topical therapies. Surgery is not curative but is crucial in improving access for future topical medical therapy. Unfortunately, a significant percentage of patients do not find relief from current standard-of-care medications and surgery, having residual symptoms or recurrence of polyposis even after cycles of systemic corticosteroids and/or surgery. "Difficult-to-treat" patients are considered those in whom an acceptable level of control is not achieved despite appropriate medical and surgical treatment. For these patients, the only option in recent years was to repeat multiple ESS, with an increasingly high risk of perioperative complications and a progressively shorter time of symptom control between surgeries.
The success in targeting specific immunologic mediators in asthma with biologics has led to interest in a similar therapeutic approach for CRSwNP. Several trials have shown subjective and objective improvements in patients with CRSwNP, with or without asthma, as well as a good safety profile. For this reason, biologic agents have been proposed as an adjunct treatment for CRSwNP, and the therapeutic landscape may change quickly in the coming months, because several monoclonal antibodies (MAbs) will become available in many countries for uncontrolled severe CRSwNP. As some biologics have received regulatory approval in Italy, the Joint Committee of Italian Society of Otorhinolaryngology on biologics in rhinology became interested in how to incorporate these new agents into the treatment paradigm for CRSwNP. In this report we summarise the substantial literature evidence about the most promising biologics in CRSwNP, presenting a consensus on the most critical issues that emerged from the workshops of the commission in 2020. We aimed to provide consensus on strategic issues in order to offer the best care for patients with severe uncontrolled CRSwNP. It is hoped that this report will be used by researchers and clinicians addressing the incorporation of these new therapeutic modalities into the CRSwNP treatment algorithm.
We used the RAND approach to standard Delphi methodology with a multi-step process. Specific statements were formulated based on an extensive review of the existing literature about the use of biologics in CRSwNP. Manuscripts were screened primarily via Ovid MEDLINE and EMBASE and from other sources (PubMed Central, Cochrane reviews, Web of Science, and Google Scholar). Our expert panel undertook a modified 2-round Delphi process and members were asked to vote independently on statements, which were formulated based on strategic discussion during 2020. We used a 4-point Likert scale ('strongly agree', 'agree', 'disagree', 'strongly disagree'). Free-text comments were encouraged if greater context was required or if the statements were ambiguous. Consensus was defined as > 70% of participants agreeing/strongly agreeing. The document was written and submitted for review and approval to all the members of the committee. All changes made were discussed and refined until unanimous approval was obtained. Statements receiving consensus are summarised at the end of each paragraph.
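The voting rule above is mechanical enough to express as a short sketch. The function name and data layout below are our own illustration, not part of the published protocol; the 4-point scale and the strictly-greater-than-70% threshold are from the text.

```python
from collections import Counter

def has_consensus(votes, threshold=0.70):
    """Illustrative sketch (names are ours): return True when the share of
    'agree' / 'strongly agree' votes strictly exceeds the threshold."""
    if not votes:
        return False
    counts = Counter(votes)  # missing keys count as zero
    agreeing = counts["strongly agree"] + counts["agree"]
    return agreeing / len(votes) > threshold

# 8 of 10 panellists agreeing (80%) reaches consensus; 7 of 10 (exactly 70%)
# does not, because the rule requires strictly more than 70%.
print(has_consensus(["strongly agree"] * 5 + ["agree"] * 3 + ["disagree"] * 2))
```

Note that a panel of 10 with exactly 7 agreeing fails the rule as written, which is why the definition says "> 70%" rather than "at least 70%".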
Management based on phenotyping and endotyping of the disease Chronic rhinosinusitis (CRS) comprises a spectrum of conditions with distinct clinical presentations and pathogenic mechanisms. For years, a clinical dichotomisation between CRS without NP (CRSsNP) and CRSwNP was adopted, assuming that the former was determined predominantly by T-helper 1 cells and the latter by T-helper 2 cells. However, further research demonstrated that the immunologic profile is much more complex, showing that there is some overlap and that endotypes may coexist in the same patient. In fact, non-eosinophilic inflammation dominated by Th1/Th17 pathways may be associated with CRSwNP, and CRSsNP patients may express a Type 2 cytokine profile. Because studies on endotyping provided full insight into the underlying cellular and molecular inflammatory mechanisms associated with CRS, the EPOS 2020 group came to the decision to change the management approach to CRS. The authors recognised the importance of moving away from differentiating management based on the phenotypical classification between CRSsNP and CRSwNP towards a new classification based on the disease being localised (often unilateral) or diffuse (always bilateral). Both these groups are further divided according to endotype into type 2 or non-type 2 disease. In case of more endotypes coexisting in the same patient, the authors suggested identifying the dominant one in order to establish the best personalised therapeutic approach. Approximately 80% of diffuse CRS in Western countries is characterised by a dominant Type 2 response driven mainly by key Type 2 cytokines (IL-4, IL-5, IL-13, etc.) and circulating/local IgE, with eosinophilia as a typical signature. Currently, both an allergic (IgE-mediated) and a non-allergic pathway are understood to play a role in the pathophysiology of the underlying eosinophilia, representing the ideal immune profile of severe CRSwNP patients who are potential candidates for biologics.
For this reason, recent position papers suggested providing confirmatory evidence of Type 2 inflammation in these patients using systemic eosinophil and IgE counts. It has also been demonstrated that the amount of local eosinophilic infiltration and the overall intensity of the inflammatory response are closely related to the prognosis and severity of disease. For this reason, the development of institutional protocols for sampling, storing and processing sino-nasal mucosa samples, sometimes in close collaboration with histopathologists, is increasing. Currently, authors debate the best procedure to define local inflammation; the most commonly used techniques include: nasal biopsy, nasal brushing or scraping (nasal cytology), nasal lavage fluid and nasal suctioning of secretions. Authors have suggested that the diagnosis of eosinophilic CRS requires quantification of the numbers of eosinophils, i.e. number/high-powered field (hpf), which may vary in the literature (8-12/hpf) and which should be achieved by analysing at least three of the most dense collections of eosinophils (very rich fields) in the samples counted at hpf (~400x). The EPOS steering group specified that the minimal cut-off to achieve evidence of Type 2 inflammation on tissue samples was eosinophils > 10/hpf. The cut-offs for the other procedures have not been established and specific studies are required to determine them. Other biomarkers used at the moment to define type 2 disease are blood eosinophilia, IgE levels and, in some specialised centers, periostin. The EPOS group suggests the following specific cut-offs for these biomarkers: > 250/microliter for blood eosinophilia and > 100 kU/l for total IgE. Other biomarkers are currently under investigation and may provide further guidance in the future. The combination of phenotyping (responsiveness to different treatments, including systemic or intranasal corticosteroids, surgical interventions, comorbid asthma, N-ERD, etc.)
and endotyping [blood/local eosinophils or neutrophils, TH-cell populations, levels of cytokines (IL-4, IL-5, or IL-13, etc.), IgE either in blood or tissue, anti-staphylococcal IgE, periostin and other future potential biomarkers] is at the moment the best way to predict the likely natural course of disease and prognosis in terms of disease control after surgery. Based on this concept, many authors have tried to identify the best way to predict the natural history of the disease, facilitating counselling of the patient on the expected outcome of surgery and helping to establish the best postoperative medical management that can offer the best chance of controlling the patient's symptoms. Finally, identification of endotypes is essential for individualisation of therapy. Adequate endotyping and phenotyping of the disease should be refined in an additional work-up in all severe uncontrolled CRSwNP patients. ENT physicians involved in the prescription of biologics in rhinology centers should standardise the diagnostic work-up for severe uncontrolled CRSwNP, strengthening multidisciplinary cooperation to define the endotype of the disease and eligibility for biologics. Rhinologic centers should develop institutional protocols for the determination of type 2 inflammation associated with severe uncontrolled CRSwNP. Clinical predictors of treatment outcomes are useful to foresee the likely natural course of disease and facilitate counselling of patients on expected outcomes of standard-of-care treatments. The new age of biologics in CRS: from scientific evidence to approval of new treatment options Monoclonal antibodies have been demonstrated to be very useful in the management of chronic eosinophilic diseases such as asthma and atopic dermatitis; the experience in these fields encouraged researchers to investigate the efficacy of these drugs in CRSwNP.
Proof-of-concept studies were performed mainly in patients with severe asthma and nasal polyps, generating promising results and building upon successful phase 3 studies. The pathophysiology of CRSwNP includes eosinophilia, T-helper cell 2 cytokines and IgE formation, and for this reason three main strategies may be undertaken with monoclonal antibodies: anti-IL-4/IL-13 signaling (dupilumab), anti-IL-5 pathways (mepolizumab, benralizumab) and anti-IgE antibodies (omalizumab). We summarise the mechanism of action, possible side effects, dose and administration modalities of the most promising biologics in the treatment of CRSwNP. Furthermore, we reviewed the literature evidence about the significant steps in their approval process. Anti-IL-4/IL-13: Dupilumab Dupilumab is a fully human monoclonal antibody targeting the α-chain subunit of IL-4 receptors (Type 1 and Type 2 IL-4Rα) and inhibiting IL-4/IL-13 signaling. Literature data have demonstrated that the dual inhibition of IL-4 and IL-13 signaling may represent an important strategy for the treatment of type 2 CRSwNP. Bachert et al., in a phase II, randomised, double-blind, placebo-controlled study, evaluated dupilumab in patients with CRSwNP refractory to INCS. Patients (n = 60) were randomised to 2 weekly subcutaneous dupilumab injections or placebo, and 51 patients completed the study. The group treated with dupilumab had a significant reduction in polyp size (primary endpoint), which was clinically observable from the 4th week of treatment. Later, the SINUS-24 and SINUS-52 phase 3 studies demonstrated the efficacy and safety of subcutaneous dupilumab 300 mg administered every 2 weeks versus placebo in severe CRSwNP not controlled with standard of care (INCS, previous SCS and/or surgery). Patients obtained significant improvements in all primary and secondary endpoints at weeks 24 and 52.
A significant improvement was observed in treated patients compared to placebo in terms of nasal congestion/obstruction severity, nasal polyp score (NPS), sinus opacification and loss of smell. For the two primary endpoints, NPS and NCS, significant improvement was observed as early as week 4 of treatment. For the UPSIT score, significant improvement was observed at week 2 of observation, with continued improvement evident up to the end of treatment in both studies for all endpoints. For loss of smell, 62% of patients treated with dupilumab changed their smell status from anosmic to non-anosmic. Lastly, dupilumab treatment resulted in a significant reduction of SCS use and of the need for revision surgery compared to placebo. Supporting dupilumab's mechanism of action, analyses of biomarkers in patients treated with dupilumab in SINUS-52 showed a consistent decrease in concentrations of serum total IgE, periostin, TARC and plasma eotaxin-3 at weeks 24 and 52, and in concentrations of ECP, total IgE, eotaxin-3 and IL-5 in nasal secretions at week 24. Furthermore, in SINUS-24, the suspension of dupilumab versus placebo at week 24 led to loss of efficacy on all endpoints observed up to week 48. Finally, literature data support the benefits of adding dupilumab to daily standard of care in patients with CRSwNP as a novel approach to treating the entire spectrum of clinical manifestations of the disease, as well as the frequently associated type 2 lower airway comorbidities. Dupilumab was the first biologic approved by the Food and Drug Administration (FDA), on June 26th, 2019, to treat adults with inadequately controlled CRSwNP. The European Medicines Agency (EMA) released a favourable opinion on dupilumab on October 26th, 2019 as add-on therapy with INCS for the treatment of adults with severe CRSwNP for whom therapy with systemic corticosteroids and/or surgery does not provide adequate disease control.
In Italy, dupilumab was approved by the Italian Agency of Drugs (AIFA) on December 9th, 2020 for adult patients with severe CRSwNP (assessed by an NPS ≥ 5 or a SNOT-22 score ≥ 50) for whom therapy with SCS and/or surgery does not provide adequate disease control, in addition to background therapy with INCS. Omalizumab (anti-IgE antibody) Omalizumab is the longest-standing monoclonal antibody, approved since 2003 for the treatment of moderate to severe persistent allergic asthma in more than 90 countries. It was designed to treat IgE-mediated disease by reducing the concentration of free IgE in blood and tissue. Given the multiple potential mechanisms by which omalizumab may limit Type 2 inflammation, it was investigated not only in asthma but also in CRSwNP. Phase III trials (POLYP 1 and POLYP 2) were conducted in parallel to evaluate the efficacy and safety of omalizumab in adults with severe uncontrolled CRSwNP refractory to treatment with INCS. The trials compared the effects of omalizumab (75-600 mg s.c. every 2 or 4 weeks, adjusted according to pre-treatment serum IgE and body weight) to placebo in patients with severe CRSwNP not controlled with standard-of-care background therapy with INCS. Both POLYP-1 (n = 138) and POLYP-2 (n = 127) met their co-primary endpoints: omalizumab-treated patients achieved statistically significant improvements in mean NPS and daily NCS at Week 24 versus placebo. Moreover, the improvements were observed as early as week 4 in both studies, demonstrating a rapid effect that was maintained over time. Key secondary endpoints were also met, including SNOT-22, total nasal symptom score (TNSS), sense of smell (assessed by UPSIT), and posterior and anterior rhinorrhea scores for post-nasal drip and runny nose. Improvements above placebo were observed for most secondary endpoints as early as Week 4 (Week 8 for UPSIT) and were maintained over the 24-week treatment period.
In addition, a reduced need for surgery by Week 24 (NPS of ≤ 4 and MCID improvement in SNOT-22) was observed in 19% of omalizumab-treated patients versus 3% of placebo-treated patients in POLYP-1, and 17% versus 3% in POLYP-2. An open-label extension study for participants in the POLYP-1 and POLYP-2 studies was conducted to evaluate the safety, efficacy and durability of response of omalizumab in adult patients with CRSwNP who were inadequate responders to INCS. Patients who completed either POLYP-1 or POLYP-2 were eligible for this study (n = 249). All patients received treatment with omalizumab for 28 weeks, followed by a 24-week period off treatment to assess the recurrence of nasal polyposis. The extension study results show that omalizumab-treated patients improved in terms of NPS and SNOT-22 scores. On the other hand, when treatment ceased, NPS, NCS and SNOT-22 progressively worsened, although they never returned to pre-treatment levels. Therefore, long-term benefits of the therapy have been demonstrated. Omalizumab was generally well tolerated, with overall rates of adverse events (AE) comparable to those observed in previous Phase III trials. No new or unexpected AEs were observed. On December 1st, 2020, the FDA approved omalizumab for the treatment of CRSwNP. Furthermore, the EMA gave a favourable opinion on omalizumab in Europe on July 7th, 2020. Biologics Targeting IL-5 pathways (Mepolizumab, Benralizumab) Mepolizumab The clinical development programme of mepolizumab in CRSwNP was composed of two phase 2 placebo-controlled studies that evaluated intravenous mepolizumab 750 mg in patients with severe nasal polyps, and of the phase 3 SYNAPSE study, which investigated the efficacy and safety of subcutaneous mepolizumab 100 mg administered via pre-filled syringe in adult CRSwNP. Bachert et al.
in the phase II study evaluated intravenous mepolizumab 750 mg every 4 weeks in 105 patients with severe bilateral CRSwNP requiring surgery according to predefined criteria (an NPS of 3 or more in one nostril and a VAS > 7). The authors demonstrated that mepolizumab led to a significant reduction in the need for surgery and a significant improvement of symptoms versus placebo. Gevaert et al. evaluated intravenous mepolizumab 750 mg every 4 weeks in 30 adults with severe uncontrolled CRSwNP. Mean total nasal polyp score was significantly improved in 60% of mepolizumab-treated patients compared to 10% of the placebo group. Howarth et al. described the results of a post hoc analysis of the MUSCA study and a meta-analysis of MUSCA and MENSA; their combined objective was to determine the change in HRQOL in mepolizumab-treated patients with severe eosinophilic asthma (SEA) with or without NP. For the MUSCA post hoc analysis, 422 patients completed the SNOT-22 questionnaire at baseline and were included. Overall, 19% of patients (n = 80) had NP; in these patients mepolizumab significantly reduced the mean SNOT-22 from baseline to week 24 compared with placebo. For the meta-analysis of MENSA/MUSCA, 166 of 936 patients (18%) had NP at screening. Patients with SEA and concomitant NP had a phenotype that showed greater benefit with mepolizumab compared with patients with SEA in the absence of NP. The phase 3 SYNAPSE study was a 52-week, randomised, double-blind, placebo-controlled, parallel-group study of subcutaneous mepolizumab 100 mg in 407 adult patients with highly symptomatic CRSwNP uncontrolled by previous surgery and treated with INCS. Eligible patients had at least 1 prior surgery in the past 10 years, recurrent nasal polyps despite treatment with standard of care, and were in need of nasal polyp surgery (overall VAS > 7 and an NPS of at least 5 with a minimum score of 2 on each side). The results were first presented at the congress of the European Respiratory Society, September 7-9, 2020.
Mepolizumab 100 mg administered subcutaneously demonstrated significant improvement in terms of size of nasal polyps and nasal obstruction at week 52 compared with placebo. Based on these data, in October 2020, the EMA accepted regulatory submissions seeking approval for the use of mepolizumab in CRSwNP. Mepolizumab is currently not indicated for the treatment of CRSwNP. Benralizumab Benralizumab is a humanised monoclonal antibody that binds to the alpha subunit of the IL-5 receptor (IL-5Rα or CD125), which is expressed on different cells such as eosinophils, basophils and type-2 innate lymphoid cells (ILC2). The mechanism of action of benralizumab, unlike that of other monoclonal antibodies binding IL-5, is not limited to interference with IL-5 inflammatory pathways. Indeed, benralizumab is able to induce antibody-dependent cellular cytotoxicity (ADCC) by binding to the FcγRIIIα receptor expressed on natural killer (NK) cells. This second mechanism of action produces a direct, rapid and nearly complete eosinophil depletion both in peripheral blood and in bronchial tissue. The Phase III studies SIROCCO and CALIMA demonstrated the efficacy and safety of benralizumab in significantly reducing annualised exacerbation rates, improving lung function and disease control versus placebo as add-on therapy to high-dosage ICS/LABA in patients with SEA and blood eosinophil counts ≥ 300 cells/microliter. A growing body of evidence suggests that benralizumab may exert a rapid and effective therapeutic action in patients with SEA and concomitant relapsing nasal polyposis. Canonica et al. presented the results of a sub-study of the ANDHI phase III-b trial at the EAACI congress in 2020, involving 153 patients with SEA and CRSwNP as a comorbidity, demonstrating the efficacy of benralizumab in improving SNOT-22 scores. Clinically relevant improvements in CRSwNP symptoms were observed following the first dose and maintained over time.
Real-world studies and case reports have confirmed the efficacy and safety of benralizumab in this population in clinical practice. Lombardo et al. assessed a cohort of 10 SEA patients with CRSwNP treated with benralizumab, demonstrating significant reductions in endoscopic Nasal Polyp Score (NPS), Lund-Mackay score and SNOT-22 after 24 weeks. Bagnasco et al., in a real-world evaluation in 34 patients with SEA and CRSwNP, confirmed the effectiveness of benralizumab on SNOT-22 reduction, with 8/26 patients (31%) recovering from anosmia after 6 months of treatment. In a phase II randomised, double-blind, placebo-controlled 20-week trial, benralizumab led to significant improvements in endoscopic NPS, CT score, SNOT-22 and UPSIT score versus baseline in severe CRSwNP patients refractory to standard therapies with at least one previous polypectomy. These results suggested that benralizumab, which targets eosinophils directly, may have a role in the treatment of patients with severe uncontrolled CRSwNP. Currently, a Phase III development programme, which includes the completed OSTRO study and the ongoing ORCHID trial, is assessing the efficacy and safety of benralizumab in patients with severe CRSwNP with or without asthma. In September 2020, a press release revealed that in the OSTRO study benralizumab met both its co-primary endpoints of reduced nasal polyp size and nasal congestion score (NCS) versus placebo as add-on therapy to standard of care in patients with severe bilateral nasal polyposis. Benralizumab is expected to be approved for use in CRSwNP in the next few years. Based on the data of the phase 3 studies, biologics already approved and those forthcoming in the near future should be considered as add-on therapy to local corticosteroids when control of the disease is not achieved even after oral corticosteroids and/or surgery.
All the members of the committee agree that biologics are recommended when Type 2 inflammation is highly likely to be the dominant endotype of severe uncontrolled CRSwNP. ENT physicians involved in the prescription of biologics should have a clear understanding not only of the sino-nasal inflammatory patterns driving diffuse CRSwNP, but also of the mechanism of action, possible side effects, dose and administration modalities of biologics. Recommendations for biologics in uncontrolled severe CRSwNP Several trials have investigated the efficacy of biologics in the treatment of CRSwNP with encouraging results. The approval by the FDA of some biologics for the treatment of severe uncontrolled CRSwNP, even without asthma, has stimulated discussion in the medical community, with a quick market entry expected not only for dupilumab but also for other monoclonal antibodies. For this reason, recent guidelines gave full consideration to the selection criteria for the ideal candidate for biologics and their place in current care pathways. In 2019, the EUFOREA team suggested for the first time five criteria as crucial to select CRSwNP patients who are eligible for biologics. In February 2020, the EPOS guidelines further defined these criteria, introducing specific cut-offs: evidence of type 2 disease (tissue eosinophils ≥ 10/hpf or blood eosinophils ≥ 250/microliter or total IgE ≥ 100 kU/l), need for at least two courses of SCS per year or long-term (> 3 months) low-dose steroids or contraindication to systemic steroids, significantly impaired quality of life (SNOT-22 ≥ 40), anosmia on smell test, and/or comorbid asthma needing regular inhaled corticosteroids. EPOS 2020 concluded that biologics are indicated in patients with bilateral nasal polyps who have had sinus surgery or are not fit for surgery and who have three of the listed criteria.
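Read as a checklist, the EPOS 2020 indication can be sketched programmatically. The snippet below is purely illustrative: the field names, units and dataclass are our own assumptions, the numeric cut-offs are those quoted above, and this is in no way a validated clinical decision tool.

```python
from dataclasses import dataclass

@dataclass
class Workup:
    tissue_eos_per_hpf: float         # eosinophils per high-powered field
    blood_eos_per_ul: float           # blood eosinophils per microliter
    total_ige_ku_l: float             # total IgE in kU/l
    scs_courses_last_year: int
    long_term_scs_or_contraindicated: bool
    snot22: int
    anosmic_on_smell_test: bool
    asthma_needing_regular_ics: bool

def type2_evidence(w: Workup) -> bool:
    # EPOS 2020 cut-offs for evidence of type 2 disease
    return (w.tissue_eos_per_hpf >= 10
            or w.blood_eos_per_ul >= 250
            or w.total_ige_ku_l >= 100)

def criteria_met(w: Workup) -> int:
    # Count how many of the five EPOS criteria are satisfied
    return sum([
        type2_evidence(w),
        w.scs_courses_last_year >= 2 or w.long_term_scs_or_contraindicated,
        w.snot22 >= 40,               # significantly impaired quality of life
        w.anosmic_on_smell_test,
        w.asthma_needing_regular_ics,
    ])

def indicated_for_biologics(w: Workup, bilateral_polyps: bool,
                            prior_surgery_or_unfit: bool) -> bool:
    # EPOS 2020: bilateral polyps, previous ESS (or unfit for surgery),
    # plus at least three of the five listed criteria.
    return bilateral_polyps and prior_surgery_or_unfit and criteria_met(w) >= 3
```

For example, an operated patient with bilateral polyps, blood eosinophils of 400/microliter, two SCS courses in the past year and a SNOT-22 of 55 meets three criteria and would screen as indicated; the same work-up without prior surgery would not, except in the exceptional circumstances discussed by the guideline authors.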
The authors were involved in an extensive discussion of whether there was a role for biologics in patients without previous sinus surgery, accepting that it was possible in exceptional circumstances. The criteria established by current guidelines refer to the use of biologics in patients with severe and uncontrolled CRSwNP, bringing to light the increasing necessity of identifying subgroups of patients who are eligible for biologics and of a clear definition of severe uncontrolled CRSwNP. The concept of disease control has been a major critical point in optimising CRS management and was introduced for the first time in EPOS 2012, combining the following parameters: control of the four major sino-nasal symptoms (nasal blockage, rhinorrhoea/postnasal drip, facial pain/pressure, smell), sleep disturbance and/or fatigue, endoscopic aspect of the nasal mucosa, and medication intake. The EPOS 2020 criteria specified that the 4 major symptoms should be specifically related to CRS and not to other causes. EPOS 2020 assumed as "difficult-to-treat" those patients in whom an acceptable level of control was not achieved despite appropriate surgery, INCS, and up to 2 brief courses of antibiotics or SCS in the last year, or long-term (> 3 months) low-dose steroids. The EPOS 2020 panel defined "short" courses of SCS as lasting 7-21 days. In the latest EUFOREA 2020 document, "uncontrolled CRSwNP" was defined as "persistent or recurring despite long-term INCS and having received at least one course of SCS in the preceding 2 years (or having a medical contraindication or intolerance to SCS) and/or previous sinonasal surgery (unless having a medical contraindication or being unwilling to undergo surgery)". The EUFOREA group suggested that a short course of oral corticosteroids should be of a minimum of 5 days at a dose of 0.5-1 mg/kg/day or more. In this last definition, the need for corticosteroids was lowered based on evaluation of the baseline criteria of subjects included in the Phase 3 studies.
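To illustrate how the EUFOREA short-course definition relates to a yearly cumulative view of steroid exposure, a back-of-envelope sketch follows. The 0.5-1 mg/kg/day for at least 5 days and the 1 g/year asthma threshold of Bourdin et al. are from the text; the function names and the 70 kg example patient are our own assumptions, and this is arithmetic illustration, not a dosing recommendation.

```python
# Hypothetical arithmetic sketch; not a dosing recommendation.

def course_dose_mg(weight_kg: float, mg_per_kg_per_day: float, days: int) -> float:
    """Total systemic corticosteroid dose of one short course."""
    return weight_kg * mg_per_kg_per_day * days

def yearly_cumulative_mg(weight_kg: float, courses) -> float:
    """Sum over (mg/kg/day, days) courses taken in one year."""
    return sum(course_dose_mg(weight_kg, dose, days) for dose, days in courses)

# A 70 kg patient: one minimal EUFOREA course (0.5 mg/kg/day for 5 days)
# totals 175 mg; four such courses a year total 700 mg, still under the
# 1 g/year ceiling proposed for severe asthma by Bourdin et al.
print(course_dose_mg(70, 0.5, 5))                # 175.0
print(yearly_cumulative_mg(70, [(0.5, 5)] * 4))  # 700.0
```

The same arithmetic shows why the cumulative framing matters: at the upper end of the definition (1 mg/kg/day for 7 days), just three courses in a year already exceed 1 g for the same patient.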
The EUFOREA group further confirmed that long-term low-dose SCS are not recommended for CRSwNP. This panel believes that a specific discussion should be opened in the medical community on the maximal yearly dose of SCS to be considered acceptable. Given the considerable variability among ENT physicians in prescribing SCS in terms of daily dose and length of short courses, we believe that it may be more appropriate to refer to the cumulative dose over the last year, as for asthma patients. Bourdin et al., in fact, suggested that "a yearly cumulative OCS dose above 1 gram should be considered unacceptable in severe asthma and should make the case for referral". The concept of severity of disease over the years has been based mainly on the impact of disease on quality of life and its local extension. Because CRSwNP has a wide variability of presentation and its severity may vary significantly between individuals, several authors have investigated how to measure it, and its definition is becoming increasingly important. Validated QOL markers have been utilised to identify eligible CRSwNP patients for Phase 3 studies with biologics, with VAS and SNOT-22 being the most commonly used; for this reason, they are currently adopted to define severe CRSwNP. Several nasal polyp endoscopic scoring systems have been described over the years, until a total NPS was recently developed and standardised. It has served as a co-primary outcome in clinical trials of biologics, its results are reproducible and responsive to change in severe disease, and it is the scoring system most commonly used to evaluate the size of nasal polyps. Equally, the Lund-Mackay radiological score allows reliable assessment of the extent of disease and, like endoscopy, is easily repeatable. Evaluation of olfaction is always important to define the severity of the disease. UPSIT is the standard clinical test used in the United States, whereas the Sniffin' Sticks test is standard in Europe.
Both have high test-retest reliability and normative values by age and sex, and are widely used in research and clinical practice. Nasal airflow may easily be measured by peak nasal inspiratory flow (PNIF), an objective measure of airflow that correlates closely with nasal airway resistance. PNIF is simple to obtain, and the devices are inexpensive and can be used for repeated measurements. The EPOS steering group identified as cut-offs for severe CRSwNP a VAS > 7, SNOT-22 > 40 and NPS > 5. Furthermore, the EPOS guidelines suggested that olfactory evaluation is also an important parameter to take into consideration, indicating as cut-offs the values specific to the test used and corresponding to a picture of anosmia. Recently, the EUFOREA expert panel lowered these parameters as follows: SNOT-22 > 35, loss of smell score (0-3) > 2 points or VAS ≥ 5, and NPS ≥ 4. The members of this committee believe that particular attention should be paid to the cut-offs for severe CRSwNP. Some concerns have been raised about the newly proposed endoscopic score cut-off (NPS ≥ 4), which seems to reflect a moderate picture more accurately. Considering that the quality-of-life parameters were also lowered, we believe that further consideration of this topic will be needed. The members of the committee agree that, given the importance of measuring the severity of the disease, particular attention should be paid to this aspect. The ENT specialist should always be familiar with the most common severity indicators, which should routinely be adopted in clinical practice. Future debates should be opened about the maximal yearly SCS dose and the specific cut-offs for the definition of severe uncontrolled CRSwNP. Multidisciplinary approach Recent scientific evidence clearly underlines the link among Type 2 diseases, prompting multidisciplinary evaluation in Type 2 inflammatory conditions. CRS healthcare often requires support from other specialists, especially in severe cases. 
Collaboration with an allergologist, pneumologist, immunologist and rheumatologist is crucial to define the endotype of the disease and coexisting Type 2 comorbidities such as atopic dermatitis, eosinophilic esophagitis or gastroenteritis, N-ERD, allergic fungal rhinosinusitis, Churg-Strauss syndrome, etc. In the context of a multidisciplinary approach, the central role of the ENT specialist in the management of CRSwNP should be underlined. The ENT specialist has a crucial role, firstly in confirming the disease, then in evaluating previous surgical treatment and in measuring the severity of the disease. Endoscopy should be considered a mainstay in the diagnosis of CRSwNP to perform adequate phenotyping, accurate staging of the disease and adequate differential diagnosis. It should be noted that the possible coexistence of inverted papilloma and diffuse CRS with nasal polyps, although rare, should always be excluded. A CT scan without endoscopy is not sufficient to confirm the diagnosis of CRSwNP. Particular attention should be paid to defining the severity of comorbidities. Biologics for CRSwNP and concomitant severe asthma should be managed mainly by asthma specialists, while for patients with severe uncontrolled CRSwNP without asthma or with mild-to-moderate asthma the role of the ENT specialist should be central. Close collaboration is always recommended to manage comorbid patients. Surgery and biologics The commission believes that the role of surgery should not be underestimated, but rather that it should be reconsidered in light of the new therapeutic opportunities. ESS usually provides very rapid relief of symptoms, and in particular of nasal obstruction, and it further improves the control of disease obtained with long-term local corticosteroids. After surgery, the sinuses are in fact more accessible to local treatments, increasing disease control by long-term use of INCS (in 60-70% of cases, disease does not recur within 5 years). 
For this reason, it is very important to distinguish between first-time and revision surgery. Another crucial factor that may influence the decision-making algorithm is the coexistence of other Type 2 comorbidities, in particular asthma (the one most often associated with CRSwNP). The severity of comorbidities should be established, because different scenarios may arise that need to be assessed separately. Patients with severe uncontrolled CRSwNP mainly managed by medical treatment and never treated by surgery, with or without mild-to-moderate asthma If a patient has never undergone surgery, ESS should be taken into consideration because it improves control of the disease by allowing INCS to reach all of the sinonasal mucosa. Based on this assumption, one could infer that in patients who never received surgery, control by INCS has probably not been fully achieved. The members of the commission believe that in a patient with uncontrolled severe CRSwNP treated mainly with long-term INCS and short courses of SCS, and who never received surgery, ESS should be taken into consideration as first-line treatment, although the following circumstances should be considered as limitations: contraindications to surgery because of the patient's general condition (severe cardiopathy, severe haemorrhagic risk, high risks for general anaesthesia, etc.); patient refusal of surgical treatment; relevant side effects from INCS and SCS; patient preferences after adequate counseling on all therapeutic options. Finally, one last matter should be covered in the near future. 
Taking into consideration that some authors have demonstrated that disease control by ESS plus long-term local corticosteroids is very difficult to achieve in the presence of negative predictors of surgical outcome (asthma, allergy, blood eosinophilia, ASA triad, high local inflammatory load, specific preoperative inflammatory patterns), some speculate that in this subgroup of patients biologics should be taken into consideration even as first-line treatment. Nevertheless, at the moment there is insufficient evidence in the literature to support this statement, and specific trials should be properly designed to verify this hypothesis. Patients with severe CRSwNP uncontrolled after medical and surgical treatments, with or without mild-to-moderate asthma The scenario may be different if CRSwNP patients have already undergone at least one previous surgery. In this situation, the ENT specialist has a central role in clarifying whether surgery was appropriate through careful evaluation of CT and endoscopic findings. It is very important to consider the surgical technique used in the previous treatments. Unfortunately, literature data on the success rate of surgical management of CRSwNP vary significantly, mainly because authors have not differentiated patients based on their phenotypes and because they have adopted different criteria to define recurrence and disease control. In addition, revision ESS rates have changed over the last decade as the extent of surgery has been tailored and adjuvant post-operative therapy optimised. Recurrence after a simple polypectomy should be interpreted differently from recurrence in a patient who underwent a more extensive approach. It should be carefully evaluated whether surgery was commensurate with the severity of the phenotype. In case of uncontrolled disease after previous appropriate surgery and good adherence to INCS, a shift to a biologic should be advised. 
On the other hand, especially in cases in which a simple polypectomy was performed and the ethmoidal labyrinth was not adequately opened, the possibility of revision surgery should be discussed with the patient. The commission agreed that in this situation the ENT specialist should have a clear idea of which additional surgical goals may be achieved to improve access to the sinus cavities, including, for example, a partial middle turbinectomy if not performed previously. Another important factor to take into consideration is the timing of recurrence and the control of symptoms that patients experienced over the years after surgery. Recently, some authors have demonstrated that patients presenting with a symptomatic recurrence within 3 years of surgery have a high risk of treatment failure, defined as the need for further surgery. Surgeons should distinguish between revision surgery required within a short period after the first procedure and revision required after several years of good disease control. In these cases, we believe that the patient should be involved in the decision to repeat surgery or to shift towards treatment with biologics. If a patient has experienced a long period of symptoms controlled by surgery and INCS, revision surgery can be discussed with the patient. In this context, the presence of clinical predictors of poor surgical outcome may steer the patient and the surgeon towards the choice of biologics. Other factors, such as the patient's age and preferences, may also influence the choice. In patients who have undergone multiple surgeries with a severe impact on quality of life and who experienced only short intervals of symptom control between interventions, the use of a biologic is recommended whatever the endoscopic nasal polyp score at the moment of evaluation. Similarly, in patients already treated by surgery who reported major complications after ESS, the shift to biologics is recommended. 
This committee believes that adequate counselling is always recommended in order to discuss all alternative treatments and possibilities with the patient based on control and severity of disease. In line with the requirements of personalised medicine, patients should participate in the decision to start a specific treatment. We believe that surgery still plays an important role, not only in optimising control of the disease but also in the dynamics between forces that range from international recommendations and payer policies to patient and physician preferences. Discussion should be opened about the possibility of using biologics as first-line treatment together with surgery in cases with a very high polyp score, to offer patients a better starting point, even if there is insufficient evidence to support this hypothesis given that no data comparing surgery in combination with biologics are available. In addition, the following recommendation of EUFOREA 2021 should be taken into consideration: "A fixed combination plan with surgery and biologic treatment starting in parallel or within a short time of one another is not advised, as the response of the individual patient to surgery or the biologic would be impossible to evaluate". Patients with severe uncontrolled CRSwNP and comorbid uncontrolled severe asthma A proportion of patients with severe uncontrolled CRSwNP may also have a coexisting, highly disabling Type 2 disease such as severe asthma. In this situation, multidisciplinary discussion with an allergologist and pneumologist is essential, and treatment with biologics should be managed mainly by them. In these patients, surgery may offer a better starting point to achieve rapid relief of sino-nasal symptoms and asthma control as soon as possible, although surgery may be delayed while the efficacy of biologics on sino-nasal symptoms and on reducing the nasal polyp score is verified. 
Close cooperation is recommended during treatment to evaluate efficacy on both asthma and CRSwNP. Surgery or a shift to another biologic may be indicated if poor control of CRSwNP is observed after 4-6 months of treatment with biologics. The commission agrees that if severe asthma coexists, close cooperation with a pneumologist and allergologist is highly recommended to evaluate, in a multidisciplinary fashion, the best way forward in terms of indication and selection of biologics. Criteria to evaluate response to biologics The EUFOREA expert panel in 2019 first described criteria to evaluate response to biologics, specifically: reduced nasal polyp size, reduced need for SCS, improved quality of life, improved sense of smell and reduced impact of comorbidities. The same criteria were adopted by EPOS 2020. Initially, the authors agreed that the first evaluation should be set at 4 months to allow an early stopping point in the absence of treatment response, given the high cost of these medications. More recently, the EUFOREA expert panel in 2020 moved the first evaluation to 6 months of treatment and specified cut-offs for each criterion. The authors specified that treatment should be continued when a clear change in at least one of the following criteria has been met: smell score increase > 0.5, NCS decrease > 0.5, NPS decrease by 1 point, SNOT-22 reduction > 8.9, VAS reduction > 2 cm. In addition, the authors recommended discussing the improvement with the patient. If the patient does not consider the improvement acceptable, salvage treatment with SCS or surgery should be considered. A proportion of patients, in fact, might need surgery, which the authors defined as "salvage surgery under biologic protection", although there are limited data on the long-term benefit of this kind of approach. 
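As an illustration only, the 6-month EUFOREA evaluation described above (continue treatment if at least one criterion shows a clear change) can be sketched as a simple check. The function name, field names and data structure are hypothetical and carry no clinical authority; the thresholds are those quoted in the text.

```python
# Hypothetical sketch of the EUFOREA 2020 6-month response check.
# Not a validated clinical tool; structure and names are illustrative.

def meets_six_month_response(baseline: dict, month6: dict) -> bool:
    """Return True if at least one 6-month criterion from the text is met."""
    return any([
        month6["smell"] - baseline["smell"] > 0.5,    # smell score increase > 0.5
        baseline["ncs"] - month6["ncs"] > 0.5,        # NCS decrease > 0.5
        baseline["nps"] - month6["nps"] >= 1,         # NPS decrease by 1 point
        baseline["snot22"] - month6["snot22"] > 8.9,  # SNOT-22 reduction > 8.9
        baseline["vas"] - month6["vas"] > 2,          # VAS reduction > 2 cm
    ])
```

Under this sketch, a patient whose NPS drops by one point but whose other scores are unchanged would still qualify as a responder, mirroring the "at least one criterion" rule above.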
Otherwise, if the patient considers even a minimal response acceptable, treatment should be prolonged to 12 months, when efficacy should be re-evaluated; all of the following criteria should be satisfied to continue treatment: NPS < 4; NCS < 2; VAS < 5; SNOT-22 < 30. If the criteria are not met, surgery should be performed or a different biologic should be considered. The EUFOREA group thus tried to standardise both the evaluation of biologic efficacy and the decisions to be taken based on the results and on the patient's comfort and preferences. We believe that future work will probably be required to confirm these criteria or to establish more or less stringent indications. Real-life experience will be crucial to support this shared decision-making model. The commission believes that the evaluation of response to biologics is extremely important. All members of the committee agree that the rhinology centres involved in prescribing biologics should organise an appropriate setting for proper follow-up and assessment of the response to biologics. Prediction of response to biologics and biomarkers Biomarkers can serve as predictors of which patients will respond best to therapy and as outcome parameters during treatment in order to establish its efficacy. At present, prediction of response to biologics in an individual patient is not possible. In fact, we currently lack reliable clinical biomarkers to differentiate among CRSwNP endotypes that may differ in their response to specific biologics. In this context, specific biomarkers should be investigated; to be clinically useful as a predictor of the response to treatment, a biomarker must be highly predictive. It is also possible that clusters of biomarkers may attain high levels of predictability, but extensive work is required to advance this field, especially to be ready in the near future when more biologics will be available for severe uncontrolled CRSwNP. 
There is no experience on the best choice of a first or second biologic, and there are no established limits for blood or serum parameters in CRSwNP. Finally, no head-to-head comparisons between biologics have been performed. Future work on biomarkers may yield better tests for selecting the first drug to start with. At present, prediction of response to biologics based on validated biomarkers is not possible. Costs of biologics in CRSwNP Although multiple studies have confirmed the efficacy of biologics for the treatment of CRSwNP, very limited data are currently available on cost analyses of biologics compared with the current standard of care. Brown et al. critically examined the efficacy and costs of biologic therapy for CRSwNP. They found few studies addressing this topic, reporting a more robust literature in asthma than in CRSwNP. They concluded that cost-efficacy studies are ambivalent when evaluating biologics. In fact, some authors have demonstrated that biologics tend to be cost-efficient, especially in patients who are poorly controlled with the standard of care, while several studies have underlined that costs might be better justified if pharmaceutical companies lowered prices and if clinicians focused more on subgroups such as clear responders and those requiring more frequent SCS prescriptions. We agree on the extreme need to plan cost-efficacy studies evaluating the long-term use of biologics compared with the current standard of care for CRSwNP. The total costs of the disease comprise direct and indirect costs, where direct costs refer to health care costs and indirect costs to lost productivity. As demonstrated in other chronic diseases, the indirect costs of CRSwNP are much greater than the direct costs because patients are usually of working age. Recently, some authors have demonstrated significant improvement in productivity after treatment of CRS, with a reduction of indirect costs. 
Likewise, if biologics are effective, they may reduce the costs related to the burden of CRSwNP. Finally, the cost of disease needs to consider the disease time horizon, and in particular the interval over which patients will probably be burdened with lifelong disease. Therefore, as with any chronic condition, we cannot focus cost estimations only on short time intervals, even if long-term cost calculations and modelling are unfortunately very difficult to perform. The commission agrees that future studies should be planned on the cost-effectiveness of the new drugs.
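As a purely illustrative sketch of why the time horizon matters, the following toy calculation sums direct and indirect yearly costs over a horizon with annual discounting. Every figure and parameter here is invented for illustration; real cost-effectiveness modelling involves utilities, responder fractions and far more detailed inputs.

```python
# Toy long-horizon cost model; all inputs are hypothetical placeholders,
# not real cost estimates for CRSwNP care or for any biologic.

def total_discounted_cost(direct_per_year, indirect_per_year, years, discount=0.03):
    """Sum yearly direct + indirect costs over a horizon, discounted annually."""
    return sum(
        (direct_per_year + indirect_per_year) / (1 + discount) ** t
        for t in range(years)
    )

# Made-up example: standard of care (low direct, high indirect costs)
# versus a biologic (high direct, low indirect costs) over 10 years.
soc = total_discounted_cost(direct_per_year=2_000, indirect_per_year=6_000, years=10)
bio = total_discounted_cost(direct_per_year=15_000, indirect_per_year=1_500, years=10)
```

Even this toy model shows how the balance between direct and indirect costs shifts with the horizon, which is the point made above about working-age patients and lifelong disease.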
Chronic rhinosinusitis (CRS) comprises a spectrum of conditions with distinct clinical presentations and pathogenic mechanisms. For years, a clinical dichotomisation of CRS without nasal polyps (CRSsNP) and CRSwNP was adopted, assuming that the former was driven predominantly by T-helper 1 cells and the latter by T-helper 2 cells. However, further research has shown that the immunologic profile is much more complex, demonstrating some overlap and that endotypes may coexist in the same patient. In fact, non-eosinophilic inflammation dominated by Th1/Th17 pathways may be associated with CRSwNP, and CRSsNP patients may express a Type 2 cytokine profile. Because studies on endotyping have provided full insight into the underlying cellular and molecular inflammatory mechanisms associated with CRS, the EPOS 2020 group decided to change the management approach to CRS. The authors recognised the importance of moving away from differentiating management based on the phenotypical classification between CRSsNP and CRSwNP towards a new classification based on the disease being localised (often unilateral) or diffuse (always bilateral). Both of these groups are further divided by endotype into Type 2 or non-Type 2 disease. When multiple endotypes coexist in the same patient, the authors suggested identifying the dominant one in order to establish the best personalised therapeutic approach. Approximately 80% of diffuse CRS in Western countries is characterised by a dominant Type 2 response driven mainly by key Type 2 cytokines (IL-4, IL-5, IL-13, etc.) and circulating/local IgE, with eosinophilia as a typical signature. Currently, both an allergic (IgE-mediated) and a non-allergic pathway are understood to play a role in the pathophysiology of the underlying eosinophilia, which represents the typical immune profile of severe CRSwNP patients who are potential candidates for biologics. 
For this reason, recent position papers suggested providing confirmatory evidence of Type 2 inflammation in these patients using systemic eosinophil and IgE counts. It has also been demonstrated that the amount of local eosinophilic infiltration and the overall intensity of the inflammatory response are closely related to the prognosis and severity of disease. For this reason, the development of institutional protocols for sampling, storing and processing sino-nasal mucosa samples, sometimes in close collaboration with histopathologists, is increasing. The best procedure to define local inflammation is currently debated; the most commonly used techniques include nasal biopsy, nasal brushing or scraping (nasal cytology), nasal lavage fluid and nasal suctioning of secretions. Authors have suggested that the diagnosis of eosinophilic CRS requires quantification of the number of eosinophils, i.e. number/high-power field (hpf), which varies in the literature (8-12/hpf) and which should be obtained by analysing at least three of the densest collections of eosinophils (very rich fields) in the samples, counted at high power (~400x). The EPOS steering group specified that the minimal cut-off for evidence of Type 2 inflammation on tissue samples is eosinophils > 10/hpf. The cut-offs for the other procedures have not been established, and specific studies are required to determine them. Other biomarkers currently used to define Type 2 disease are blood eosinophilia, IgE levels and, in some specialised centres, periostin. The EPOS group suggests the following cut-offs for these biomarkers: > 250/microliter for blood eosinophilia and > 100 kU/l for total IgE. Other biomarkers are currently under investigation and may provide further guidance in the future. The combination of phenotyping (responsiveness to different treatments, including systemic or intranasal corticosteroids, surgical interventions, comorbid asthma, N-ERD, etc.) 
and endotyping [blood/local eosinophils or neutrophils, Th-cell populations, levels of cytokines (IL-4, IL-5, IL-13, etc.), IgE in blood or tissue, anti-staphylococcal IgE, periostin and other potential future biomarkers] is at the moment the best way to predict the likely natural course of disease and the prognosis in terms of disease control after surgery. Based on this concept, many authors have tried to identify the best way to predict the natural history of the disease, facilitating counseling of the patient on the expected outcome of surgery and helping to establish the postoperative medical management offering the best chance of controlling the patient's symptoms. Finally, identification of endotypes is essential for the individualisation of therapy. Adequate endotyping and phenotyping of the disease should be refined in an additional work-up in all patients with severe uncontrolled CRSwNP. ENT physicians involved in the prescription of biologics in rhinology centres should standardise the diagnostic work-up for severe uncontrolled CRSwNP, strengthening multidisciplinary cooperation to define the endotype of the disease and eligibility for biologics. Rhinology centres should develop institutional protocols for the determination of Type 2 inflammation associated with severe uncontrolled CRSwNP. Clinical predictors of treatment outcome are useful to foresee the likely natural course of disease and facilitate counseling of patients on the expected outcomes of standard-of-care treatments.
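As a minimal sketch only, the Type 2 cut-offs discussed in this section (tissue eosinophils > 10/hpf, blood eosinophils > 250/microliter, total IgE > 100 kU/l) could be encoded in an institutional work-up protocol as a screening helper. The function and its structure are hypothetical, not a validated diagnostic instrument; only the thresholds come from the text.

```python
# Illustrative helper encoding the Type 2 cut-offs quoted in the text.
# Hypothetical sketch for an institutional protocol; not a diagnostic tool.

TISSUE_EOS_PER_HPF = 10   # eosinophils per high-power field
BLOOD_EOS_PER_UL = 250    # cells per microliter
TOTAL_IGE_KU_L = 100      # kU/l

def type2_evidence(tissue_eos_hpf=None, blood_eos_ul=None, total_ige_ku_l=None):
    """Return the list of Type 2 criteria exceeded; missing values are skipped."""
    met = []
    if tissue_eos_hpf is not None and tissue_eos_hpf > TISSUE_EOS_PER_HPF:
        met.append("tissue eosinophils > 10/hpf")
    if blood_eos_ul is not None and blood_eos_ul > BLOOD_EOS_PER_UL:
        met.append("blood eosinophils > 250/microliter")
    if total_ige_ku_l is not None and total_ige_ku_l > TOTAL_IGE_KU_L:
        met.append("total IgE > 100 kU/l")
    return met
```

Returning the list of criteria met, rather than a single boolean, mirrors the point above that multiple biomarkers are usually weighed together during endotyping.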
Monoclonal antibodies have been shown to be very useful in the management of chronic eosinophilic diseases such as asthma and atopic dermatitis; the experience in these fields encouraged researchers to investigate the efficacy of these drugs in CRSwNP. Proof-of-concept studies were performed mainly in patients with severe asthma and nasal polyps, generating promising results and paving the way for successful phase 3 studies. The pathophysiology of CRSwNP includes eosinophilia, T-helper 2 cytokines and IgE formation; for this reason, three main strategies may be pursued with monoclonal antibodies: anti-IL-4/IL-13 signaling (dupilumab), anti-IL-5 pathways (mepolizumab, benralizumab) and anti-IgE antibodies (omalizumab). We summarise the mechanism of action, possible side effects, dose and administration modalities of the most promising biologics in the treatment of CRSwNP, and we have reviewed the literature evidence on the significant steps in their approval process. Anti-IL-4/IL-13: Dupilumab Dupilumab is a fully human monoclonal antibody targeting the α-chain subunit of IL-4 receptors (Type 1 and Type 2 IL-4Rα) and inhibiting IL-4/IL-13 signaling. Literature data have demonstrated that the dual inhibition of IL-4 and IL-13 signaling may represent an important strategy for the treatment of Type 2 CRSwNP. Bachert et al., in a phase II, randomised, double-blind, placebo-controlled study, evaluated dupilumab in patients with CRSwNP refractory to INCS. Patients (n = 60) were randomised to 2 weekly subcutaneous dupilumab injections or placebo, and 51 patients completed the study. The group treated with dupilumab had a significant reduction in polyp size (the primary endpoint), which was clinically observable from the 4th week of treatment. 
Later, the SINUS-24 and SINUS-52 phase 3 studies demonstrated the efficacy and safety of subcutaneous dupilumab 300 mg administered every 2 weeks versus placebo in severe CRSwNP not controlled with standard of care (INCS, previous SCS and/or surgery). Patients obtained significant improvements in all primary and secondary endpoints at weeks 24 and 52. A significant improvement was observed in treated patients compared to placebo in terms of nasal congestion/obstruction severity, nasal polyp score (NPS), sinus opacification and loss of smell. For the two primary endpoints, NPS and NCS, significant improvement was observed as early as week 4 of treatment. For the UPSIT score, significant improvement was observed at week 2, with continued improvement evident up to the end of treatment in both studies for all endpoints. For loss of smell, 62% of patients treated with dupilumab changed their smell status from anosmic to non-anosmic. Lastly, dupilumab treatment resulted in a significant reduction in SCS use and in the need for revision surgery compared to placebo. Supporting dupilumab's mechanism of action, analyses of biomarkers in patients treated with dupilumab in SINUS-52 showed a consistent decrease in serum concentrations of total IgE, periostin and TARC and in plasma eotaxin-3 at weeks 24 and 52, and in the concentrations of ECP, total IgE, eotaxin-3 and IL-5 in nasal secretions at week 24. Furthermore, in SINUS-24, suspension of dupilumab versus placebo at week 24 led to a loss of efficacy on all endpoints observed up to week 48. Finally, literature data support the benefits of adding dupilumab to daily standard of care in patients with CRSwNP as a novel approach for treating the entire spectrum of clinical manifestations of the disease, as well as the frequently associated Type 2 lower airway comorbidities. 
Dupilumab was the first biologic approved by the Food and Drug Administration (FDA), on June 26th, 2019, for the treatment of adults with inadequately controlled CRSwNP. The European Medicines Agency (EMA) released a favourable opinion on dupilumab on October 26th, 2019 as add-on therapy with INCS for the treatment of adults with severe CRSwNP for whom therapy with systemic corticosteroids and/or surgery does not provide adequate disease control. In Italy, dupilumab was approved by the Italian Agency of Drugs (AIFA) on December 9th, 2020 for adult patients with severe CRSwNP (assessed by an NPS ≥ 5 or a SNOT-22 score ≥ 50) for whom therapy with SCS and/or surgery does not provide adequate disease control, in addition to background therapy with INCS. Omalizumab (anti-IgE antibody) Omalizumab is the longest-established monoclonal antibody, approved since 2003 for the treatment of moderate-to-severe persistent allergic asthma in more than 90 countries. It was designed to treat IgE-mediated disease by reducing the concentration of free IgE in blood and tissue. Given the multiple potential mechanisms by which omalizumab may limit Type 2 inflammation, it was investigated not only in asthma but also in CRSwNP. Phase III trials (POLYP 1 and POLYP 2) were conducted in parallel to evaluate the efficacy and safety of omalizumab in adults with severe uncontrolled CRSwNP refractory to treatment with INCS. The trials compared the effects of omalizumab (75-600 mg s.c. every 2 or 4 weeks, adjusted according to pre-treatment serum IgE and body weight) with placebo in patients with severe CRSwNP not controlled with standard-of-care background therapy with INCS. Both POLYP-1 (n = 138) and POLYP-2 (n = 127) met their co-primary endpoints: omalizumab-treated patients achieved statistically significant improvements in mean NPS and daily NCS at week 24 versus placebo. Moreover, improvements were observed as early as week 4 in both studies, demonstrating a rapid effect that was maintained over time. 
Key secondary endpoints were also met, including SNOT-22, total nasal symptom score (TNSS), sense of smell (assessed by UPSIT), and posterior and anterior rhinorrhea scores for post-nasal drip and runny nose. Improvements over placebo were observed for most secondary endpoints as early as week 4 (week 8 for UPSIT) and were maintained over the 24-week treatment period. In addition, a reduced need for surgery by week 24 (NPS ≤ 4 and MCID improvement in SNOT-22) was observed in 19% of omalizumab-treated patients versus 3% of placebo-treated patients in POLYP-1, and in 17% versus 3% in POLYP-2. An open-label extension study for participants in the POLYP-1 and POLYP-2 studies was conducted to evaluate the safety, efficacy and durability of response of omalizumab in adult patients with CRSwNP who were inadequate responders to INCS. Patients who completed either POLYP-1 or POLYP-2 were eligible for this study (n = 249). All patients received treatment with omalizumab for 28 weeks, followed by a 24-week period off treatment to assess the recurrence of nasal polyposis. The extension study results show that omalizumab-treated patients improved in terms of NPS and SNOT-22 scores. On the other hand, after cessation of treatment, NPS, NCS and SNOT-22 progressively worsened, although they never returned to pre-treatment levels. Therefore, long-term benefits of the therapy have been demonstrated. Omalizumab was generally well tolerated, with overall rates of adverse events (AE) comparable to those observed in previous Phase III trials. No new or unexpected AEs were observed. On November 30th, 2019, the FDA approved omalizumab for the treatment of CRSwNP. Furthermore, the EMA gave a favourable opinion on omalizumab on July 7th, 2020 in Europe. 
Biologics targeting IL-5 pathways (mepolizumab, benralizumab) Mepolizumab The clinical development programme of mepolizumab in CRSwNP comprised two phase 2 placebo-controlled studies that evaluated intravenous mepolizumab 750 mg in patients with severe nasal polyps, and the phase 3 SYNAPSE study, which investigated the efficacy and safety of subcutaneous mepolizumab 100 mg administered via pre-filled syringe in adults with CRSwNP. Bachert et al., in a phase II study, evaluated intravenous mepolizumab 750 mg every 4 weeks in 105 patients with severe bilateral CRSwNP requiring surgery according to predefined criteria (NPS of 3 or more in at least 1 nostril and a VAS > 7). The authors demonstrated that mepolizumab led to a significant reduction in the need for surgery and a significant improvement in symptoms versus placebo. Gevaert et al. evaluated intravenous mepolizumab 750 mg every 4 weeks in 30 adults with severe uncontrolled CRSwNP. The mean total nasal polyp score was significantly improved in 60% of mepolizumab-treated patients compared to 10% of the placebo group. Howarth et al. described the results of a post hoc analysis of the MUSCA study and a meta-analysis of MUSCA and MENSA; their combined objective was to determine the change in HRQOL in mepolizumab-treated patients with severe eosinophilic asthma (SEA) with or without NP. For the MUSCA post hoc analysis, 422 patients completed the SNOT-22 questionnaire at baseline and were included. Overall, 19% of patients (n = 80) had NP; in these patients, mepolizumab and placebo significantly reduced the mean SNOT-22 from baseline to week 24. For the meta-analysis of MENSA/MUSCA, 166 of 936 patients (18%) had NP at screening. Patients with SEA and concomitant NP showed a phenotype with greater benefit from mepolizumab compared with patients with SEA in the absence of NP. 
The phase 3 SYNAPSE study was a 52-week, randomised, double-blind, placebo-controlled, parallel group study of subcutaneous mepolizumab 100 mg in 407 adult patients with highly symptomatic CRSwNP uncontrolled by previous surgery and treated with INCS. Eligible patients had at least 1 prior surgery in the past 10 years, recurrent nasal polyps despite treatment with standard of care and in need of nasal polyp surgery (overall VAS > 7 and an NPS of at least 5 with a minimum score of 2 in each side). The results were presented firstly at the congress of the European Respiratory Society, September 7-9, 2020 . Mepolizumab 100 mg administered subcutaneously demonstrated significant improvement in terms of size of nasal polyps and nasal obstruction at week 52 compared with placebo. Based on these data, in October 2020, EMA accepted regulatory submissions seeking approval for the use mepolizumab in CRWwNP. Mepolizumab is currently not indicated for the treatment of CRSwNP. Benralizumab Benralizumab is a humanised monoclonal antibody that binds to the alpha subunit of the IL-5 receptor (IL-5R or CD125) which is expressed on different cells like eosinophils, basophils and type-2 innate lymphoid cells (ILC2). The mechanism of action of benralizumab, different from other monoclonal antibodies binding IL-5, is not limited to interference with IL-5 inflammatory pathways. Indeed, benralizumab is able to induce an antibody-dependent cellular cytotoxicity (ADCC) by binding to the Fc γ RIII α receptor expressed on natural killer (NK) cells. This second mechanism of action produces a direct, rapid and nearly complete eosinophil depletion both in peripheral blood and bronchial tissue . 
The Phase III studies, SIROCCO and CALIMA , , demonstrated the efficacy and safety of benralizumab in significantly reducing annualised exacerbations rates, improving lung function and disease control vs placebo as add-on therapy to high-dosage ICS/LABA in patients with SEA and blood eosinophil counts ≥ 300 cells/microliter. A growing body of evidence suggests that benralizumab may exert a rapid and effective therapeutic action in patients with SEA and concomitant relapsing nasal polyposis . Canonica et al. presented the results of a sub-study of ANDHI phase III-b trial at EAACI congress in 2020, involving 153 patients with SEA and CRSwNP as comorbidity, demonstrating the efficacy of benralizumab in improving SNOT-22 scores. Clinically relevant improvements in CRSwNP symptoms were observed following the first dose and maintained over time. Real world studies and case reports have confirmed the efficacy and safety of benralizumab in this population in clinical practice. Lombardo et al. assessed a cohort of 10 SEA patients with CRSwNP treated with benralizumab, demonstrating significant reduction of endoscopic Nasal Polyp Score (NPS), Lund-Mackay Score and SNOT-22 after 24 weeks. Bagnasco et al. in a real-world evaluation in 34 patients with SEA and CRSwNP, confirmed the effectiveness of benralizumab on SNOT-22 reduction, with 8/26 patients (31%) recovering from anosmia after 6 months of treatment. In a phase II randomised, double-blind, placebo-controlled 20-week trial , benralizumab led to significant improvement in endoscopic NPS, CT score, SNOT-22 and UPSIT score vs baseline in severe CRSwNP patients refractory to standard therapies with at least one previous polypectomy. These results suggested that benralizumab, which targets eosinophils directly, may have a role in the treatment of patients with severe uncontrolled CRSwNP. 
Currently, a Phase III development programme which includes the completed OSTRO study and the ongoing ORCHID trial is assessing the efficacy and safety of benralizumab in patients with severe CRSwNP with or without asthma. On September 2020, a press release revealed that the OSTRO study Benralizumab met both its co-primary endpoints of reduced nasal polyp size and nasal congestion score (NCS) vs placebo as add-on therapy to standard of care in patients with severe bilateral nasal polyposis. Benralizumab for use in CRSwNP is expected to be approved in the next few years. Basing on the data of phase 3 studies, biologics approved and oncoming in the next future should be considered as add-on therapy to local corticosteroids when control of the disease is not achieved even after oral corticosteroids and/or surgery. All the members of the committee agree that biologics are recommended when Type 2 inflammation is highly likely to be the dominant endotype of severe uncontrolled CRSwNP. ENT physicians involved in the prescriptions of biologics should have a clear understanding not only about sino- nasal inflammatory patterns driving diffuse CRSwNP, but also about the mechanism of action, possible side effects, dose and administration modalities of biologics.
Dupilumab is a fully human monoclonal antibody targeting the α-chain subunit of IL-4 receptors (type 1 and type 2 IL-4Rα) and inhibiting IL-4/IL-13 signalling . Literature data have demonstrated that dual inhibition of IL-4 and IL-13 signalling may represent an important strategy for the treatment of type 2 CRSwNP. Bachert et al., in a phase II, randomised, double-blind, placebo-controlled study, evaluated dupilumab in patients with CRSwNP refractory to INCS. Patients (n = 60) were randomised to 2 weekly subcutaneous dupilumab injections or placebo, and 51 patients completed the study. The group treated with dupilumab had a significant reduction in polyp size (primary endpoint), which was clinically observable from the 4th week of treatment. Later, the SINUS-24 and SINUS-52 phase 3 studies demonstrated the efficacy and safety of subcutaneous dupilumab 300 mg administered every 2 weeks versus placebo in severe CRSwNP not controlled with standard of care (INCS, previous SCS and/or surgery). Patients obtained significant improvements in all primary and secondary endpoints at weeks 24 and 52. A significant improvement was observed in treated patients compared to placebo in terms of nasal congestion/obstruction severity, nasal polyp score (NPS), sinus opacification and loss of smell. For the two primary endpoints, NPS and NCS, significant improvement was observed as early as week 4 of treatment. For the UPSIT score, significant improvement was observed at week 2, with continued improvement evident up to the end of treatment in both studies for all endpoints. For loss of smell, 62% of patients treated with dupilumab changed their smell status from anosmic to non-anosmic. Lastly, dupilumab treatment resulted in a significant reduction of SCS use and of the need for revision surgery compared to placebo. 
Supporting dupilumab’s mechanism of action, analyses of biomarkers in patients treated with dupilumab in SINUS-52 showed a consistent decrease in concentrations of serum total IgE, periostin, TARC and plasma eotaxin-3 at weeks 24 and 52, and in concentrations of ECP, total IgE, eotaxin-3 and IL-5 in nasal secretions at week 24. Furthermore, in SINUS-24, the suspension of dupilumab vs placebo at week 24 led to a loss of efficacy on all endpoints observed up to week 48. Finally, literature data support the benefits of adding dupilumab to daily standard of care in patients with CRSwNP as a novel approach to treating the entire spectrum of clinical manifestations of the disease, as well as the frequently associated type 2 lower airway comorbidities . Dupilumab was the first biologic approved by the Food and Drug Administration (FDA), on June 26th, 2019, to treat adults with inadequately controlled CRSwNP. The European Medicines Agency (EMA) released a favourable opinion on dupilumab on October 26th, 2019 as add-on therapy with INCS for the treatment of adults with severe CRSwNP for whom therapy with systemic corticosteroids and/or surgery does not provide adequate disease control. In Italy, dupilumab was approved by the Italian Agency of Drugs (AIFA) on December 9th, 2020 for adult patients with severe CRSwNP (assessed by an NPS ≥ 5 or a SNOT-22 score ≥ 50) for whom therapy with SCS and/or surgery does not provide adequate disease control, in addition to background therapy with INCS.
Omalizumab is the longest-lived monoclonal antibody, approved since 2003 for the treatment of moderate to severe persistent allergic asthma in more than 90 countries . It was designed to treat IgE-mediated disease by reducing the concentration of free IgE in blood and tissue . Given the multiple potential mechanisms by which omalizumab may limit Type 2 inflammation, it was investigated not only in asthma but also in CRSwNP. Phase III trials (POLYP 1 and POLYP 2) were conducted in parallel to evaluate the efficacy and safety of omalizumab in adults with severe uncontrolled CRSwNP refractory to treatment with INCS. The trials compared the effects of omalizumab (75-600 mg s.c. every 2 or 4 weeks, adjusted according to pre-treatment serum IgE and body weight) to placebo in patients with severe CRSwNP not controlled with standard of care background therapy with INCS. Both POLYP-1 (n = 138) and POLYP-2 (n = 127) met their co-primary endpoints: omalizumab-treated patients achieved statistically significant improvements in mean NPS and daily NCS at week 24 versus placebo. Moreover, the improvements were observed as early as week 4 in both studies, demonstrating a rapid effect that was maintained over time. Key secondary endpoints were also met, including SNOT-22, total nasal symptom score (TNSS), sense of smell (assessed by UPSIT), and posterior and anterior rhinorrhea scores for post-nasal drip and runny nose. Improvements above placebo were observed for most secondary endpoints as early as week 4 (week 8 for UPSIT) and were maintained over the 24-week treatment period. In addition, a reduced need for surgery by week 24 (NPS ≤ 4 and MCID improvement in SNOT-22) was observed in 19% of omalizumab-treated patients versus 3% of placebo-treated patients in POLYP-1, and in 17% versus 3% in POLYP-2 . 
An open-label extension study for participants in the POLYP-1 and POLYP-2 studies was conducted to evaluate the safety, efficacy and durability of response of omalizumab in adult patients with CRSwNP who were inadequate responders to INCS. Patients who completed either POLYP-1 or POLYP-2 were eligible for this study (n = 249). All patients received treatment with omalizumab for 28 weeks, followed by a 24-week period off treatment to assess the recurrence of nasal polyposis. The extension study results show that omalizumab-treated patients improved in terms of NPS and SNOT-22 scores. On the other hand, when treatment was stopped, NPS, NCS and SNOT-22 scores progressively worsened, although they never returned to pre-treatment levels; long-term benefits of the therapy were therefore demonstrated. Omalizumab was generally well tolerated, with overall rates of adverse events (AE) comparable to those observed in previous Phase III trials . No new or unexpected AEs were observed. On November 30th, 2020, the FDA approved omalizumab for the treatment of CRSwNP. Furthermore, the EMA gave a favourable opinion on omalizumab on July 7th, 2020 in Europe.
Mepolizumab
The clinical development programme of mepolizumab in CRSwNP was composed of two phase 2 placebo-controlled studies that evaluated intravenous mepolizumab 750 mg in patients with severe nasal polyps , , and of the phase 3 SYNAPSE study, which investigated the efficacy and safety of subcutaneous mepolizumab 100 mg administered via pre-filled syringe in adult CRSwNP . Bachert et al., in the phase II study, evaluated intravenous mepolizumab 750 mg every 4 weeks in 105 patients with severe bilateral CRSwNP requiring surgery according to predefined criteria (NPS ≥ 3 in at least 1 nostril and a VAS > 7). The authors demonstrated that mepolizumab led to a significant reduction in the need for surgery and a significant improvement of symptoms vs placebo. Gevaert et al. evaluated intravenous mepolizumab 750 mg every 4 weeks in 30 adults with severe uncontrolled CRSwNP. Mean total nasal polyp score was significantly improved in 60% of mepolizumab-treated patients compared to 10% of the placebo group. Howarth et al. described the results of a post hoc analysis of the MUSCA study and a meta-analysis of MUSCA and MENSA ; their combined objective was to determine the change in HRQOL in mepolizumab-treated patients with severe eosinophilic asthma (SEA) with or without NP. For the MUSCA post hoc analysis, 422 patients completed the SNOT-22 questionnaire at baseline and were included. Overall, 19% of patients (n = 80) had NP; in these patients mepolizumab significantly reduced the mean SNOT-22 from baseline to week 24 compared with placebo. For the meta-analysis of MENSA/MUSCA, 166 of 936 patients (18%) had NP at screening. Patients with SEA and concomitant NP had a phenotype that showed greater benefit with mepolizumab compared with patients with SEA in the absence of NP. 
The phase 3 SYNAPSE study was a 52-week, randomised, double-blind, placebo-controlled, parallel-group study of subcutaneous mepolizumab 100 mg in 407 adult patients with highly symptomatic CRSwNP uncontrolled by previous surgery and treated with INCS. Eligible patients had at least 1 prior surgery in the past 10 years and recurrent nasal polyps despite treatment with standard of care, and were in need of nasal polyp surgery (overall VAS > 7 and an NPS of at least 5, with a minimum score of 2 on each side). The results were first presented at the congress of the European Respiratory Society, September 7-9, 2020 . Mepolizumab 100 mg administered subcutaneously demonstrated significant improvement in terms of size of nasal polyps and nasal obstruction at week 52 compared with placebo. Based on these data, in October 2020 the EMA accepted regulatory submissions seeking approval for the use of mepolizumab in CRSwNP. Mepolizumab is currently not indicated for the treatment of CRSwNP.
Benralizumab
Benralizumab is a humanised monoclonal antibody that binds to the alpha subunit of the IL-5 receptor (IL-5Rα or CD125), which is expressed on different cells such as eosinophils, basophils and type-2 innate lymphoid cells (ILC2). Unlike other monoclonal antibodies binding IL-5, the mechanism of action of benralizumab is not limited to interference with IL-5 inflammatory pathways. Indeed, benralizumab is able to induce antibody-dependent cellular cytotoxicity (ADCC) by binding to the FcγRIIIα receptor expressed on natural killer (NK) cells. This second mechanism of action produces a direct, rapid and nearly complete eosinophil depletion both in peripheral blood and in bronchial tissue . The Phase III studies SIROCCO and CALIMA , , demonstrated the efficacy and safety of benralizumab in significantly reducing annualised exacerbation rates and improving lung function and disease control vs placebo as add-on therapy to high-dosage ICS/LABA in patients with SEA and blood eosinophil counts ≥ 300 cells/microliter. A growing body of evidence suggests that benralizumab may exert a rapid and effective therapeutic action in patients with SEA and concomitant relapsing nasal polyposis . Canonica et al. presented the results of a sub-study of the ANDHI phase III-b trial at the EAACI congress in 2020, involving 153 patients with SEA and comorbid CRSwNP, demonstrating the efficacy of benralizumab in improving SNOT-22 scores. Clinically relevant improvements in CRSwNP symptoms were observed following the first dose and maintained over time. Real-world studies and case reports have confirmed the efficacy and safety of benralizumab in this population in clinical practice. Lombardo et al. assessed a cohort of 10 SEA patients with CRSwNP treated with benralizumab, demonstrating significant reductions in endoscopic Nasal Polyp Score (NPS), Lund-Mackay score and SNOT-22 after 24 weeks. Bagnasco et al. 
in a real-world evaluation of 34 patients with SEA and CRSwNP, confirmed the effectiveness of benralizumab on SNOT-22 reduction, with 8/26 patients (31%) recovering from anosmia after 6 months of treatment. In a phase II randomised, double-blind, placebo-controlled 20-week trial , benralizumab led to significant improvements in endoscopic NPS, CT score, SNOT-22 and UPSIT score vs baseline in severe CRSwNP patients refractory to standard therapies and with at least one previous polypectomy. These results suggest that benralizumab, which targets eosinophils directly, may have a role in the treatment of patients with severe uncontrolled CRSwNP. Currently, a Phase III development programme, which includes the completed OSTRO study and the ongoing ORCHID trial, is assessing the efficacy and safety of benralizumab in patients with severe CRSwNP with or without asthma. In September 2020, a press release revealed that in the OSTRO study benralizumab met both of its co-primary endpoints, reduced nasal polyp size and nasal congestion score (NCS), vs placebo as add-on therapy to standard of care in patients with severe bilateral nasal polyposis. Benralizumab is expected to be approved for use in CRSwNP in the next few years. Based on the data of the phase 3 studies, biologics that are approved or forthcoming should be considered as add-on therapy to local corticosteroids when control of the disease is not achieved even after oral corticosteroids and/or surgery. All the members of the committee agree that biologics are recommended when Type 2 inflammation is highly likely to be the dominant endotype of severe uncontrolled CRSwNP. ENT physicians involved in the prescription of biologics should have a clear understanding not only of the sino-nasal inflammatory patterns driving diffuse CRSwNP, but also of the mechanism of action, possible side effects, dose and administration modalities of biologics.
Several trials have investigated the efficacy of biologics in the treatment of CRSwNP with encouraging results. The approval by the FDA of some biologics for the treatment of severe uncontrolled CRSwNP, even without asthma, has stimulated discussion in the medical community, with a quick market entry expected not only for dupilumab but also for other monoclonal antibodies. For this reason, recent guidelines , gave full consideration to the selection criteria for the ideal candidate for biologics and to their place in current care pathways. In 2019, the EUFOREA team suggested for the first time five criteria as crucial to select CRSwNP patients who are eligible for biologics. In February 2020, the EPOS guidelines further defined these criteria, introducing specific cut-offs: evidence of type 2 disease (tissue eosinophils ≥ 10/hpf, or blood eosinophils ≥ 250/microliter, or total IgE ≥ 100), need for at least two courses of SCS per year or long-term (> 3 months) low-dose steroids or contraindication to systemic steroids, significantly impaired quality of life (SNOT-22 ≥ 40), anosmia on smell test, and/or comorbid asthma needing regular inhaled corticosteroids. EPOS 2020 concluded that biologics are indicated in patients with bilateral nasal polyps who had sinus surgery or were not fit for surgery and who met three of the listed criteria. The authors were involved in an extensive discussion of whether there was a role for biologics in patients without previous sinus surgery, accepting that it was possible in exceptional circumstances. The criteria established by current guidelines , refer to the use of biologics in patients with severe uncontrolled CRSwNP, bringing to light the increasing need both to identify the subgroups of patients who are eligible for biologics and to clearly define severe uncontrolled CRSwNP. 
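The EPOS 2020 selection logic described above is essentially a small rule set, which can be made concrete with a toy sketch. This is purely illustrative, not a validated clinical tool: function and variable names are invented for this example, and only the thresholds are taken from the text.

```python
# Illustrative sketch of the EPOS 2020 biologic-eligibility rules for
# CRSwNP quoted above. Not a clinical tool; names are assumptions.

def type2_evidence(tissue_eos_hpf, blood_eos_per_ul, total_ige):
    """Evidence of type 2 disease per the EPOS 2020 cut-offs."""
    return (tissue_eos_hpf >= 10
            or blood_eos_per_ul >= 250
            or total_ige >= 100)

def eligible_for_biologic(bilateral_polyps, prior_surgery_or_unfit, n_criteria_met):
    """Bilateral polyps, prior surgery (or unfit for surgery),
    and at least 3 of the 5 listed criteria."""
    return bilateral_polyps and prior_surgery_or_unfit and n_criteria_met >= 3

# Hypothetical patient: blood eosinophils 400/microliter, SNOT-22 of 55,
# anosmic on smell testing, two SCS courses in the past year, no asthma.
n_criteria = sum([
    type2_evidence(tissue_eos_hpf=5, blood_eos_per_ul=400, total_ige=80),
    True,   # >= 2 SCS courses/year (or long-term low-dose steroids)
    True,   # SNOT-22 >= 40 (here 55)
    True,   # anosmic on smell test
    False,  # no comorbid asthma needing regular inhaled corticosteroids
])
print(eligible_for_biologic(True, True, n_criteria))  # -> True
```

Note that, as stated above, EPOS 2020 treats previous sinus surgery (or unfitness for surgery) as a gate rather than as one of the counted criteria, with exceptions only in exceptional circumstances.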
The concept of disease control has been a major critical point in optimising CRS management and was introduced for the first time in EPOS 2012, combining the following parameters: control of the four major sino-nasal symptoms (nasal blockage, rhinorrhoea/postnasal drip, facial pain/pressure, smell), sleep disturbance and/or fatigue, endoscopic aspect of nasal mucosa, and medication intake. The EPOS 2020 criteria specified that the 4 major symptoms should be specifically related to CRS and not to other causes . EPOS 2020 considered as “difficult-to-treat” those patients in whom an acceptable level of control was not achieved despite appropriate surgery, INCS, and up to 2 brief courses of antibiotics or SCS in the last year, or long-term (> 3 months) low-dose steroids. The EPOS 2020 panel defined “short” courses of SCS as lasting 7-21 days. In the latest EUFOREA 2020 document , “uncontrolled CRSwNP” was defined as “persistent or recurring despite long-term INCS and having received at least one course of SCS in the preceding 2 years (or having a medical contraindication or intolerance to SCS) and/or previous sinonasal surgery (unless having a medical contraindication or being unwilling to undergo surgery)”. The EUFOREA group suggested that a short course of oral corticosteroids should last a minimum of 5 days at a dose of 0.5-1 mg/kg/day or more. In this last definition, the need for corticosteroids was lowered based on evaluation of the baseline criteria of subjects included in the Phase 3 studies. The EUFOREA group further confirmed that long-term low-dose SCS are not recommended for CRSwNP. This panel believes that a specific discussion should be opened within the medical community on the maximal yearly SCS dose to be considered acceptable. Given the considerable variability among ENT physicians in prescribing SCS, in terms of daily dose and length of short courses, we believe it may be more appropriate to refer to the cumulative dose over the last year, as for asthma patients. Bourdin et al. 
in fact suggested that “a yearly cumulative OCS dose above 1 gram should be considered unacceptable in severe asthma and should make the case for referral”. Over the years, the concept of disease severity has mainly been based on the impact of the disease on quality of life and on its local extension. Because CRSwNP has a wide variability of presentation and its severity may vary significantly between individuals, several authors have investigated how to measure it, and its definition is becoming increasingly important. Validated QOL markers have been utilised to identify eligible CRSwNP patients for Phase 3 studies with biologics, with VAS and SNOT-22 being the most commonly used; for this reason, they are currently adopted to define severe CRSwNP , . Several nasal polyp endoscopic scoring systems have been described over the years , until a total NPS was recently developed and standardised . It has served as a co-primary outcome in clinical trials of biologics; its results are reproducible and responsive to change in severe disease, and it is the score most commonly used to evaluate the size of nasal polyps. Equally, the Lund-Mackay radiological score allows reliable assessment of the extent of disease and, like endoscopy, is easily repeatable . Evaluation of olfaction is always important to define the severity of the disease. UPSIT is the standard clinical test used in the United States, whereas the Sniffin’ Sticks test is standard in Europe , . Both have high test-retest reliability and normative values by age and sex, and are widely used in research and clinical practice. Nasal airflow may easily be measured by peak nasal inspiratory flow (PNIF), an objective measure of airflow closely correlated with nasal airway resistance. PNIF is simple to obtain, and the devices are inexpensive and can be used for repeated measurements . The EPOS steering group identified as cut-offs for severe CRSwNP a VAS > 7, SNOT-22 > 40 and NPS > 5. 
Furthermore, the EPOS guidelines suggested that olfactory evaluation is also an important parameter to take into consideration, with cut-offs specific to the test used and indicating a picture of anosmia. Recently, the expert EUFOREA panel lowered these parameters as follows: SNOT-22 > 35, loss of smell score (0-3) > 2 points or VAS ≥ 5, and NPS ≥ 4. The members of this committee believe that particular attention should be paid to the cut-offs for severe CRSwNP. Some concerns have been raised about the newly proposed endoscopic score cut-off (NPS ≥ 4), which seems to more properly reflect a moderate picture. Considering that the quality-of-life parameters were also lowered, we believe that further consideration should be given to this topic in the future. Given the importance of measuring the severity of the disease, the ENT should always be familiar with the most common severity indicators, which should routinely be adopted in clinical practice. Future debates should be opened about the maximal yearly SCS dose and about specific cut-offs for the definition of severe uncontrolled CRSwNP.
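To make the committee's concern about the lowered thresholds concrete, the two quoted severity cut-off sets can be contrasted in a short sketch. Reading each set as a conjunction is an assumption on our part, and the function names are invented for illustration; this is not a clinical instrument.

```python
# Contrast of the severity cut-offs quoted above; purely illustrative.

def severe_epos(vas, snot22, nps):
    # EPOS steering group: VAS > 7, SNOT-22 > 40 and NPS > 5
    return vas > 7 and snot22 > 40 and nps > 5

def severe_euforea(vas, snot22, nps, smell_loss):
    # EUFOREA panel: SNOT-22 > 35, NPS >= 4,
    # and loss-of-smell score (0-3) > 2 or VAS >= 5
    return snot22 > 35 and nps >= 4 and (smell_loss > 2 or vas >= 5)

# A patient with NPS 4 and SNOT-22 38 is "severe" under the EUFOREA
# thresholds but not under EPOS -- the gap the committee highlights.
print(severe_epos(vas=6, snot22=38, nps=4))                  # -> False
print(severe_euforea(vas=6, snot22=38, nps=4, smell_loss=1)) # -> True
```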
Recent scientific evidence clearly underlines the link between Type 2 diseases, prompting the implementation of multidisciplinary evaluation in Type 2 inflammatory conditions. CRS healthcare often requires support from other specialists, especially in severe cases. Collaboration with an allergologist, pneumologist, immunologist and rheumatologist is crucial to define the endotype of the disease and coexisting Type 2 comorbidities such as atopic dermatitis, eosinophilic esophagitis or gastroenteritis, N-ERD, allergic fungal rhinosinusitis, Churg-Strauss syndrome, etc. , . In the context of a multidisciplinary approach, the central role of the ENT in the management of CRSwNP should be underlined. The ENT has a crucial role, firstly in confirming the disease, and then in evaluating previous surgical treatment and measuring the severity of the disease. Endoscopy should be considered a mainstay in the diagnosis of CRSwNP, allowing adequate phenotyping, accurate staging of the disease and adequate differential diagnosis. It should be noted that the possible coexistence of inverted papilloma and diffuse CRS with nasal polyps, even if rare, should always be excluded . A CT scan without endoscopy is not sufficient to confirm the diagnosis of CRSwNP. Particular attention should be paid to defining the severity of comorbidities. Biologics for CRSwNP and concomitant severe asthma should be mainly managed by asthma specialists, while for patients with severe uncontrolled CRSwNP without asthma or with mild-moderate asthma the role of the ENT specialist should be central. Close collaboration is always recommended to manage comorbid patients.
The commission believes that the role of surgery should not be underestimated, but rather reconsidered in the light of the new therapeutic opportunities. ESS usually leads to very quick relief of symptoms, in particular of nasal obstruction, and it further improves the control of disease obtained with long-term local corticosteroids. After surgery, the sinuses are in fact more accessible to local treatments, increasing disease control by long-term use of INCS (in 60-70% of cases, disease does not recur within 5 years) , . For this reason, it is very important to distinguish between first-time and revision surgery. Another crucial factor that may influence the decision-making algorithm is the coexistence of other Type 2 comorbidities, in particular asthma (the one most often associated with CRSwNP). The severity of comorbidities should be established, because different scenarios may be faced that need to be assessed separately , .
Patients with severe uncontrolled CRSwNP mainly managed by medical treatment and never treated by surgery, with or without mild-moderate asthma
If a patient has never undergone surgery, ESS should be taken into consideration because it improves control of the disease by INCS, spreading their distribution to all the sinonasal mucosa. Based on this assumption, one could infer that in patients who have never received surgery, full control with INCS has probably not been achieved , . 
The members of the commission believe that in a patient with uncontrolled severe CRSwNP treated mainly with long-term INCS and brief cycles of SCS and who has never received surgery, ESS should be taken into consideration as first-line treatment, although the following circumstances should be considered as limitations: contraindications to surgery because of the patient’s general condition (severe cardiopathy, severe haemorrhagic risk, high risk for general anaesthesia, etc.); patient refusal of surgical treatment; relevant side effects from INCS and SCS; and patient preferences after adequate counselling on all therapeutic options. Finally, one last matter should be covered in the near future. Taking into consideration that some authors , , have demonstrated that disease control by ESS plus long-term local corticosteroids is very difficult to achieve in the presence of negative predictors of surgical outcomes (asthma, allergy, blood eosinophilia, ASA triad, high local inflammatory load, specific preoperative inflammatory patterns) , some speculate that in this subgroup of patients biologics should be taken into consideration even as first-line treatment. Nevertheless, at the moment there is insufficient evidence in the literature to support this statement, and specific trials should be properly designed to verify this hypothesis.
Patients with severe CRSwNP uncontrolled after medical and surgical treatments, with or without mild-moderate asthma
This may be a different scenario if CRSwNP patients have already undergone at least one previous surgery. In this situation, the ENT specialist has a central role in clarifying whether surgery was appropriate or not by careful evaluation of CT and endoscopic findings. It is very important to consider the surgical technique used in the previous treatments. 
Unfortunately, literature data on the success rate of surgical management of CRSwNP vary significantly, mainly because authors have not differentiated patients based on their phenotypes and because they have adopted different criteria to define recurrence and disease control . In addition, revision ESS rates have changed over the last decade, with tailoring of the extent of surgery and optimisation of adjuvant post-operative therapy . Recurrence after a simple polypectomy should be understood differently from recurrence in a patient who underwent a more extended approach. It should be carefully evaluated whether surgery was commensurate with the severity of the phenotype. In case of uncontrolled disease after previous appropriate surgery and good adherence to INCS, the shift to a biologic should be advised. On the other hand, especially in cases in which a simple polypectomy was performed and the ethmoidal labyrinth was not adequately opened, the possibility of revision surgery should be discussed with the patient. The commission agreed that in this situation the ENT specialist should have a clear idea of which additional surgical goals may be achieved to improve access to the sinus cavities, including, for example, a partial middle turbinectomy if not performed previously. Another important factor to take into consideration is the timing of recurrence and the control of symptoms that patients experienced over the years after surgery. Recently, some authors have demonstrated that patients presenting with a symptomatic recurrence within 3 years of surgery have a high risk of treatment failure, defined as the need for further surgery. Surgeons should distinguish between revision surgery required within a short period of the first procedure and revision required after several years of good disease control. In these cases, we believe that the patient should be involved in the decision to repeat surgery or to shift towards treatment with biologics. 
If patients have experienced a long period of symptom control with surgery and INCS, revision surgery can be discussed with the patient. In this context, the presence of clinical predictors of poor surgical outcomes may steer the patient and the surgeon towards the choice of biologics. Other factors may influence the choice, such as the age of the patient and his/her preferences. In patients who have undergone multiple surgeries with a severe impact on quality of life and who experienced only short intervals of symptom control between interventions, the use of a biologic is recommended whatever the endoscopic nasal polyp score at the time of evaluation. Similarly, in patients already treated by surgery who reported major complications after ESS, the shift to biologics is recommended. This committee believes that adequate counselling is always recommended, in order to discuss all the alternative treatments and possibilities with the patient based on control and severity of disease. In line with the requirements of personalised medicine, patients should participate in the decision to start a specific treatment. We believe that surgery still plays an important role, not only in optimising control of the disease, but also within the dynamics between forces that range from international recommendations and payer policies to patient and physician preferences. Discussion should be opened about the possibility of using biologics as first-line treatment together with surgery in cases with a very high polyp score, to offer patients a better starting point, even though there is insufficient evidence to support this hypothesis, considering that no data compare surgery in combination with biologics. 
In addition, the following recommendation of EUFOREA 2021 should be taken into consideration: “A fixed combination plan with surgery and biologic treatment starting in parallel or within a short time of one another is not advised, as the response of the individual patient to surgery or the biologic would be impossible to evaluate”.

Patients with severe uncontrolled CRSwNP and comorbid uncontrolled severe asthma

A proportion of patients with severe uncontrolled CRSwNP may also have a coexisting, highly disabling Type 2 disease such as severe asthma. In this situation, multidisciplinary discussion with an allergologist and pneumologist is essential, and treatment with biologics should be mainly managed by them. In these patients, surgery may offer a better starting point to achieve quick relief of sino-nasal symptoms and asthma control as soon as possible; alternatively, surgery may be delayed while the efficacy of biologics on sino-nasal symptoms and the nasal polyp score is verified. Close cooperation is recommended during treatment to evaluate efficacy on both asthma and CRSwNP. Surgery or a shift to another biologic may be indicated if poor control of CRSwNP is observed after 4-6 months of treatment with biologics. The commission agrees that if severe asthma co-exists, close cooperation with a pneumologist and allergologist is highly recommended to evaluate in a multidisciplinary fashion the best way forward in terms of indications and selection of biologics.
The EUFOREA expert panel in 2019 first described criteria to evaluate response to biologics, specifically: reduced nasal polyp size, reduced need for SCS, improved quality of life, improved sense of smell, and reduced impact of comorbidities. The same criteria were adopted by EPOS 2020. Initially, the authors agreed that the first evaluation should be set at 4 months, to allow an early stopping point if treatment response is lacking, given the high cost of these medications. More recently, the EUFOREA expert panel in 2020 prolonged the first evaluation to 6 months of treatment and specified cut-offs for each criterion. The authors specified that treatment should be continued when a clear change in at least one of the following criteria has been met: smell score increase > 0.5, NCS decrease > 0.5, NPS decrease by 1 point, SNOT-22 reduction > 8.9, VAS reduction > 2 cm. In addition, the authors recommended discussing the improvement with the patient. If the patient does not accept the improvement, salvage treatment by SCS or surgery should be considered. A proportion of patients, in fact, might need surgery, which the authors defined as “salvage surgery under biologic protection”, although there are limited data about the long-term benefit of this kind of approach. Otherwise, if the patient accepts the improvement, even in case of a minimal response, treatment should be prolonged to 12 months, when efficacy should be re-evaluated; all the following definitions should be satisfied to continue treatment: NPS < 4; NCS < 2; VAS < 5; SNOT-22 < 30. If the criteria are not met, surgery should be performed or a different biologic should be considered. The EUFOREA group thus tried to standardise the evaluation of biologic efficacy and the decisions to be adopted based on the results and the patient’s comfort and preferences. We believe that future work will probably be required to confirm these criteria or to define more or less stringent indications.
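The cut-offs above amount to a simple decision rule. As a minimal illustrative sketch only (function and variable names are ours; this is not a clinical decision tool, and the authoritative thresholds are those of the EUFOREA consensus itself), the 6-month and 12-month checks could be written as:

```python
# Illustrative sketch of the EUFOREA 2020 response criteria described above.
# Thresholds are transcribed from the text; all names are hypothetical.

def response_at_6_months(smell_gain, ncs_drop, nps_drop, snot22_drop, vas_drop_cm):
    """Treatment may be continued if at least one criterion is met."""
    criteria = [
        smell_gain > 0.5,      # smell score increase > 0.5
        ncs_drop > 0.5,        # nasal congestion score decrease > 0.5
        nps_drop >= 1,         # nasal polyp score decrease by 1 point
        snot22_drop > 8.9,     # SNOT-22 reduction > 8.9
        vas_drop_cm > 2,       # VAS reduction > 2 cm
    ]
    return any(criteria)

def continue_at_12_months(nps, ncs, vas, snot22):
    """All criteria must be satisfied to continue treatment."""
    return nps < 4 and ncs < 2 and vas < 5 and snot22 < 30

print(response_at_6_months(0.6, 0.2, 0, 5.0, 1.0))            # True: smell criterion met
print(continue_at_12_months(nps=3, ncs=1, vas=4, snot22=25))  # True
```

Note the asymmetry the text describes: at 6 months any single improvement suffices, whereas at 12 months every absolute target must be reached.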
Real-life experience will be crucial to support this shared decision-making model. The commission believes that structured evaluation of the response to biologics is extremely important. All the members of the committee agree that the rhinology centres involved in the prescription of biologics should organise the right setting for proper follow-up and assessment of the response to biologics.
Biomarkers can serve as predictors of which patients will respond best to therapy, and as outcome parameters during treatment in order to establish its efficacy. At present, prediction of the response to biologics in an individual patient is not possible. In fact, we currently lack reliable clinical biomarkers to differentiate among the CRSwNP endotypes that may differ in their response to specific biologics. In this context, specific biomarkers should be investigated; to be clinically useful as a predictor of the response to treatment, a biomarker must be highly predictive. It is also possible that clusters of biomarkers may attain high levels of predictability, but extensive work is required to advance this field, especially to be ready in the near future when more biologics will be available for severe uncontrolled CRSwNP. There is no experience on the best choice of a first or second biologic, and there are no known limitations based on blood or serum parameters for CRSwNP. Finally, no head-to-head comparisons between biologics have been performed. Future work on biomarkers may yield better tests for selecting the first drug to start with; at present, prediction of the response to biologics based on validated biomarkers is not possible.
Although multiple studies have confirmed the efficacy of biologics for the treatment of CRSwNP, very limited data are currently available on cost analyses of biologics compared with the current standard of care. Brown et al. critically examined the efficacy and costs of biologic therapy for CRSwNP. They found few studies addressing this topic, reporting a more robust literature in asthma compared to CRSwNP. They concluded that cost-efficacy studies are ambivalent when evaluating biologics. In fact, some authors demonstrated that biologics tended to be cost-efficient, especially in patients who are poorly controlled with the standard of care, while several studies have underlined that costs might be better justified if pharmaceutical companies lowered prices and if clinicians focused more on subgroups such as clear responders and those requiring more frequent SCS prescriptions. We agree on the pressing need to plan cost-efficacy studies evaluating the long-term use of biologics compared with the current standard of care for CRSwNP. Total costs of the disease account for direct and indirect costs, where direct costs refer to health care costs and indirect costs refer to lost productivity. As demonstrated in other chronic diseases, the indirect costs of CRSwNP are much greater than the direct costs because patients are usually of working age. Recently, some authors have demonstrated significant improvement in productivity after treatment of CRS and a reduction of indirect costs. Likewise, if biologics are effective, they may reduce the costs related to the burden of CRSwNP. Finally, the cost of disease needs to consider the disease time horizon, and in particular the time interval over which patients will probably be burdened with lifelong disease. Therefore, as with any chronic condition, we cannot focus cost estimations only on short time intervals, even if long-term cost calculations and modelling are unfortunately very difficult.
The commission agrees that future studies should be planned on the cost-effectiveness of these new drugs.
In a patient with uncontrolled severe Type 2 CRSwNP in whom systemic medical treatment or surgery has proven ineffective, a long-term plan using a biologic should be contemplated together with an informed patient. This plan needs to consider the endotype, comorbidities and former treatment history (long-term INCS, surgeries, SCS and their efficacy, duration of effect, and adverse events). We believe that patients who are still symptomatic despite current maximal medical therapy and surgical intervention are the main candidates for treatment with biologics. Based on the new developments, the physician should properly inform the patient about the available alternatives, involving him/her in the clinical decisions in line with the principles of precision medicine and shared decision making. The clinical scenario may evolve further in the coming months and years, because other biologics will receive approval for severe CRSwNP, others are currently in the pipeline, and even more targets are being identified. Future studies should aim to characterise the patients in whom each biologic has the greatest clinical efficacy. In fact, the general biomarkers of Type 2 inflammation currently adopted may help to broadly identify patients who may benefit from biologics, while ongoing research may lead to the identification of new biomarkers that are useful in selecting the right patient. For these reasons, we expect that clinical algorithms and care pathways will be implemented in the future based on improved selection criteria. Future clinical trials are needed to refine recommendations for the initiation of biologics, and to compare biologics with the current standard of care and with each other. A multidisciplinary shared airway approach may possibly identify patients who require treatment with biologics at an earlier stage in the disease process.
This may have an overall positive impact on the psychological burden of the disease on patients and on the healthcare service. Nevertheless, current literature data do not support the use of biologics at an earlier stage in the disease process of CRS, although this scenario might change in the future. At present, biologics are mainly considered as adjunct therapy in patients with severe uncontrolled CRSwNP and evidence of Type 2 disease. The exact application of biologics will continue to evolve. Combinations of biologic therapies with surgery will probably be explored. Further research into biologics versus surgery, as well as long-term disease control, is required. It is likely that biologics will in time become an alternative to sinus surgery as currently performed. We believe that the role of biologics in conjunction with surgery, after surgery, or as an alternative to it needs to be investigated further. While MAbs are well tolerated with no severe adverse effects, further research is required to determine their long-term benefits, comparability to other medical treatments, and potential side effects. Physicians, patients, insurers and government payers should not ignore considerations about costs. At this time, there are minimal data examining cost-efficacy and long-term side effects. To better understand costs, studies should be designed to evaluate whether biologics may decrease the costs related to severe uncontrolled CRSwNP.
We would like to thank Professor Gaetano Paludetti, who encouraged and stimulated the commission’s discussion of the important field of biologics in severe uncontrolled CRSwNP when he was President of the Italian Society of Otorhinolaryngology. We also thank Professor Paludetti for his final critical revision of the manuscript.
The authors declare no conflict of interest.
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Evaluation of biocontrol efficacy of rhizosphere

Pepper (Capsicum annuum L.) is widely cultivated for its culinary uses. However, its production is often hindered by biotic and abiotic factors. Among biotic factors, plant pathogens cause severe damage to pepper, especially Phytophthora capsici, the agent of a destructive soil-borne disease of pepper cultivation. This pathogen infects pepper plants through asexually generated, characteristically biflagellate, unicellular and motile zoospores released from lemon-shaped sporangia. The pathogenicity of P. capsici is distinct, and the pathogen shows resistance to multiple fungicides, making it challenging to control with traditional chemical methods. In recent years, the environmental pollution and resistance issues associated with chemical pesticides have become more severe. Biological control has emerged as a preferred strategy for sustainable agriculture due to its minimal environmental impact, long-lasting efficacy, and reduced risk of resistance development. Several microbial species with biocontrol activity against P. capsici have been utilized for controlling the pathogen. Ochrobactrum pseudogrignonense NC1 significantly inhibited the mycelial growth and zoospore production of P. capsici. Bacillus cereus B1301 and Chryseobacterium sp. R98 had high biocontrol activity against P. capsici. Trichoderma aggressivum f. europaeum, T. longibrachiatum, Paecilomyces variotii, and T. saturnisporum could be used for the control of P. capsici in pepper. Epiphytic yeasts from Piperaceae showed multiple antagonistic mechanisms, including the production of volatile organic compounds, hyperparasitism, and the production of β-1,3-glucanase. Five Trichoderma strains, namely T. harzianum, T. longibrachiatum, T. yunnanense, T. asperellum (T2-10 and T2-31) and Trichoderma sp., were excellent potential agents for controlling P. capsici.
Pseudomonas species have been widely employed for the biological control of P. capsici. The biocontrol mechanisms of Pseudomonas encompass competitive exclusion for space and nutrients, siderophore production (iron chelation), secretion of catabolic enzymes and secondary metabolites, and induction of systemic resistance in host plants. For instance, Pseudomonas otitidis YJR27 and P. putida YJR92 can inhibit P. capsici and manage Phytophthora blight in pepper plants. P. plecoglossicida YJR13 and P. putida YJR92 not only effectively hindered mycelial growth, zoospore germination, and germ tube elongation of P. capsici, but also colonized pepper roots through cell motility, biofilm formation, and chemotaxis towards root exudates. Additionally, Pseudomonas strains markedly inhibited sporangium formation, zoospore release, and mycelial growth in liquid culture. Beneficial biocontrol bacteria are abundant in the plant rhizosphere, aiding plant resistance against pathogen infections, enhancing nutrient absorption, and promoting plant growth. To obtain a biocontrol bacterium with plant growth-promoting attributes and low-molecular-weight anti-oomycete substances for controlling P. capsici, we tested a strain of P. aeruginosa isolated from the rhizosphere of pepper. In this study, this strain demonstrated significant biocontrol efficacy against P. capsici both in vivo and in vitro. Moreover, the P. aeruginosa strain readily colonized the pepper rhizosphere and strongly suppressed pepper blight in the field. Notably, the α-pinene produced by P. aeruginosa exhibited anti-oomycete activity. These findings offer a promising avenue for developing novel methods to prevent pepper blight caused by P. capsici.
P. capsici isolation and identification

P. capsici was isolated from a diseased pepper plant with blight, cultured on potato dextrose agar (PDA) medium at 28°C, and identified through PCR amplification using primers (ITS1 and ITS4) targeting the internal transcribed spacer sequence.

Isolation and screening for biocontrol bacteria

The procedure followed the method with slight modifications. For the isolation of biocontrol bacteria, the surface soil was first removed, followed by collection of soil from a depth of 5–10 cm in the rhizosphere of 10 pepper plants heavily affected by P. capsici in Gaoqiao Town, Changsha County, Hunan Province (113.33°E, 28.44°N). Ten-gram soil samples were suspended in 90 ml of sterile water, agitated for 30 minutes, and then serially diluted 10,000-fold. Subsequently, 100 μl of the suspensions at the highest dilution was spread on Luria-Bertani (LB) agar plates and incubated at 28°C for 48 hours until bacterial colonies appeared. Single colonies were then selected, purified on LB agar plates for 3 days, and subsequently cultured in 500 ml of liquid LB medium for 48 hours. To screen for biocontrol bacteria, a 5 mm diameter P. capsici disc was placed at the center of a PDA plate. Next, 5 μl of each of four bacterial suspensions was inoculated 2 cm away from the disc in a crisscross pattern on the agar plate. The plates were then incubated for 7 days to observe the inhibition of mycelial growth. Bacteria exhibiting anti-oomycete activity were chosen for further validation. To further validate the biocontrol activity, the candidate bacterium was streaked horizontally on the left side of a PDA plate and incubated at 28°C for 48 h to obtain bacterial growth. After bacterial growth, a 5 mm diameter disc of P. capsici was placed on the right side of the same plate, 2 cm from the bacterial streak. Plates with both the bacterium and P. capsici were incubated at 28°C. Each treatment was repeated 3 times.
When the mycelia of P. capsici on a control plate (no bacterium) had grown to the edge of the plate, the biocontrol activity of the bacterium was assessed by comparing the inhibition of mycelial expansion in the presence of the bacterial strain and measuring the mycelial radius in the direction of the bacterium. For each plate, the rate of inhibition of mycelial growth was calculated as (Rb - Rc) / Rb, where Rb is the mycelial radius in the direction opposite the bacterium and Rc is the mycelial radius in the direction of the bacterium. The bacterium with the highest anti-oomycete activity was analyzed further.

Taxonomic identification of strain Pa608

Morphological characters and 16S rRNA sequencing

The bacterial strain exhibiting the highest anti-oomycete activity was designated Pa608. The purified bacterium was cultivated on nutrient agar (NA) and LB medium for 3 days at 28°C, followed by an assessment of colony morphology and color. The Gram characteristics of strain Pa608 were determined using a Gram stain kit. Subsequently, the shape and size of strain Pa608 were examined by scanning electron microscopy. For the molecular identification of strain Pa608, colony PCR was employed to amplify the 16S rRNA sequence using primers 27F and 1492R. The 20 μL reaction mixture contained approximately 50 ng of total DNA, 5 mM each dNTP, 20 pmol each of the forward and reverse primers, and 0.5 U of Taq DNA polymerase (TransGen Biotech Co., Ltd., Beijing, China). PCR amplification was performed in a thermocycler under the following conditions: denaturation for 1 min at 94°C; annealing for 45 sec at 56°C; extension for 1 min at 72°C; final extension for 10 min at 72°C. The PCR products were visualized by agarose gel electrophoresis and then sequenced by Beijing Qingke Biotechnology Co., Ltd.
The obtained sequences were manually curated and compared against the National Center for Biotechnology Information (NCBI) database using BLASTn to identify the most closely related bacterial species. A phylogenetic tree encompassing strain Pa608 and seven other bacteria within the genus Pseudomonas was constructed using the neighbor-joining algorithm with 1000 bootstrap replicates in MEGA7.

Extracellular enzyme characteristics

Protease, cellulase, amylase and phosphate-solubilizing activities of strain Pa608 were assessed on LB agar medium supplemented with 3% skim milk powder, carboxymethyl cellulose agar medium (K₂HPO₄ 2.5 g, Na₂HPO₄ 2.5 g, sodium carboxymethylcellulose 20.0 g, peptone 2.0 g, yeast extract 0.5 g, agar 14.0 g), 1% starch-pancreatic soy agar (trypticase 15.0 g, enzymatic digest of soybean meal 5.0 g, NaCl 5.0 g, soluble starch 3.0 g, agar 15.0 g), and a phosphate solubilization medium (glucose 10.0 g, KH₂PO₄ 10.9 g, (NH₄)₂SO₄ 1.0 g, MgSO₄·7H₂O 0.16 g, FeSO₄·7H₂O 0.005 g, CaCl₂·2H₂O 0.011 g, MnCl₂·4H₂O 0.002 g, agar 14.0 g), respectively. The plates were incubated for 3 days at 28°C, and the extracellular enzyme characteristics were investigated by measuring the transparent zones.

Inhibition effect of strain Pa608 on pathogens

The antimicrobial spectrum of strain Pa608 was assessed against several plant pathogens, including Sclerotinia sclerotiorum, Pyricularia oryzae, Diaporthe citri, Botrytis cinerea, Fusarium graminearum, and Penicillium simplicissimum. The confrontational culture method, as described earlier, was used to determine the width of the inhibition zone, which was observed and quantified.

Pot experiment

The pepper variety Zhongke M105 F1 was selected for the pot experiment. The pepper seeds were disinfected with 0.1% sodium hypochlorite and thoroughly rinsed with distilled water three times to remove any sodium hypochlorite residue.
The treated seeds were then placed on filter paper in a Petri dish until germination occurred. Once sprouts emerged, they were transplanted 2 cm deep into soil in 10-cm-wide pots. The planting medium consisted of a 2:1 mixture of Walga horticultural nutrient soil and vermiculite. Plants were allowed to grow to the eight-leaf stage, and only robust and uniformly developed plants were selected for subsequent experiments. To prepare the sporangium suspension, P. capsici was first inoculated on oatmeal agar and incubated at 25°C for seven days, then transferred to 28°C for 48 hours under continuous illumination to induce sporangium formation. Subsequently, the sporangia were harvested from the agar surface using a brush. The collected culture was then left at room temperature for three hours to encourage zoospore release, and subsequently diluted with distilled water to an inoculation concentration of 2.0×10⁴ zoospores/ml. Strain Pa608 was inoculated into 100 mL of LB medium and incubated at 28°C and 150 rpm for 24 hours to obtain a bacterial suspension with a concentration of 1.0×10⁸ cells/ml, measured with a spectrophotometer. Three treatments were implemented in the experiment. Treatment 1: injection of 3 ml of sterile water as a control. Treatment 2: injection of 3 mL of Phytophthora zoospore suspension (2.0×10⁴ zoospores/ml) 1.5 cm deep into the soil, maintaining a distance from the plants to avoid direct contact. Treatment 3: injection of 3 mL of the strain Pa608 bacterial suspension (1.0×10⁸ cells/ml) into the soil, followed by 3 mL of Phytophthora zoospore suspension (2.0×10⁴ zoospores/ml). Each treatment comprised 12 pots of peppers, with 3 pepper plants per pot. Plants were watered regularly, and high temperature and humidity were maintained to promote disease development.
Disease severity was evaluated ten days post-inoculation using a rating scale from 0 to 5, and the disease index (DI) was determined using the formula:

DI = (Σ(s × n) / (N × S)) × 100

where DI represents the disease index, s the scale rating, n the number of plants at a specific scale rating, N the total number of evaluated plants, and S the maximum scale rating.

Colonization dynamics of strain Pa608 in pepper rhizosphere soil

A total of 6 pots of pepper plants were divided into 2 treatments, with 3 plants per treatment. Treatment 1 involved injecting 3 mL of Phytophthora zoospore suspension (2.0×10⁴ zoospores/ml) into the soil at the base of the pepper plants, followed by 3 mL of the strain Pa608 bacterial suspension (1.0×10⁸ cells/ml). Treatment 2 involved injecting 3 mL of the strain Pa608 bacterial suspension into the soil at the base of the pepper plants. Soil samples from the pots were collected on days 1, 3, 5, 10, 15, 30, and 45, and subjected to the gradient dilution method to quantify strain Pa608. Pseudomonas agar medium was used as the culture medium; samples were diluted 1,000-fold, and after incubation at 30°C for 2 days, colonies were checked for the characteristic blue-green color and counted.

Field experiment

The pepper greenhouse experiment was conducted at the Vegetable Research Institute base of the Hunan Agricultural Sciences Academy in Gaoqiao Town, Changsha County, Hunan Province. In previous years, the greenhouse had suffered from severe disease outbreaks, with an incidence rate of over 90%. The ridges were 1.2 meters wide, with 2 rows per ridge, a plant spacing of 35 cm, and a furrow width of 30 cm. There were 2 treatments: Treatment 1 was the blank control, with each pepper plant root irrigated with 50 mL of sterile water; in Treatment 2, each pepper plant root was irrigated with 50 mL of the strain Pa608 bacterial suspension (1.0×10⁸ cells/ml).
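Both the pot and field evaluations score plants on the same 0–5 scale, so the disease index calculation can be written out once. A minimal sketch (the rating counts in the example are hypothetical):

```python
def disease_index(ratings, max_scale=5):
    """Disease index DI = (Σ(s × n) / (N × S)) × 100.

    ratings: dict mapping scale rating s (0-5) to the number of plants n at that rating
    max_scale: maximum scale rating S
    """
    n_total = sum(ratings.values())               # N, total evaluated plants
    weighted = sum(s * n for s, n in ratings.items())  # Σ(s × n)
    return weighted / (n_total * max_scale) * 100

# Hypothetical counts for 36 plants on the 0-5 scale
counts = {0: 10, 1: 8, 2: 6, 3: 5, 4: 4, 5: 3}
print(round(disease_index(counts), 1))  # 36.7
```

A DI of 0 means all plants are healthy, while 100 means every plant is at the maximum severity rating.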
The pepper seedlings were treated 3 days after transplanting, followed by two additional applications in early June and early July, for a total of 3 applications. Harvesting and recording of plant height, yield and disease incidence were scheduled for July 28th. The disease index was classified according to Sunwoo’s standards, as previously mentioned.

GC-MS analysis of the metabolites produced by strain Pa608

The fermentation broth of strain Pa608 (after 48 hours of fermentation) was extracted with ethyl acetate. The ratio of organic solvent to fermentation broth was 1:1, with an extraction time of 8 hours, repeated three times. The extract was concentrated to a paste at 40°C using a rotary evaporator and stored at 4°C until use. Gas chromatography-mass spectrometry (GC-MS) was used to analyze the compound composition of the fermentation broth. The GC-MS analysis conditions followed the published method, with the column temperature ramped to 300°C at 10°C/min and held for 40 min, helium as the carrier gas, a split ratio of 20:1, an injection volume of 1 μL, and an injector temperature of 325°C. The mass spectrometry conditions also followed the published method, with the ion source held at 280°C and a scan range of 33 to 600 m/z. The components of the volatile compounds were identified by comparing retention times against the NIST mass spectral library. The relative contents were determined from the percentages of the peak areas of the different compounds. A structural analysis of volatile organic compounds with antimicrobial potential was conducted based on the relevant literature. The data were analyzed using the NIST Mass Spectral Library version 8 (NIST08.L) to match the acquired mass spectra, qualitatively identify compounds based on matching factors, and retrieve pertinent information for each compound.
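The relative contents reported from the GC-MS analysis are simple peak-area percentages. A minimal sketch, with hypothetical compound names and peak areas:

```python
def relative_contents(peak_areas):
    """Relative content of each compound as a percentage of the total peak area."""
    total = sum(peak_areas.values())
    return {name: area / total * 100 for name, area in peak_areas.items()}

# Hypothetical integrated peak areas from a chromatogram
areas = {"alpha-pinene": 1.2e6, "compound B": 3.6e6, "compound C": 1.2e6}
for name, pct in relative_contents(areas).items():
    print(f"{name}: {pct:.1f}%")  # e.g. alpha-pinene: 20.0%
```

Note that these percentages are only relative to the detected peaks; they are not absolute concentrations, which would require calibration with standards.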
Biocontrol activity assessment of metabolites produced by strain Pa608

The compounds with significant peak areas in the GC-MS results, particularly those previously recognized for their antimicrobial properties, were chosen as candidate anti-oomycete substances. Pure standards of these compounds were acquired, and their inhibitory activity against P. capsici was evaluated using the growth-rate inhibition method. The final concentrations of the candidate compounds in the PDA medium were 250 mg/L, 50 mg/L, 10 mg/L, and 5 mg/L, with an equal volume of sterile water serving as a control. One 5 mm diameter disc of P. capsici was positioned at the center of each Petri dish, which was then incubated at a constant 25°C for 7 days. The procedure was replicated three times. The inhibitory rate was calculated using the formula: inhibitory rate = 100% × ((colony diameter in control - disc diameter) - (colony diameter in treatment - disc diameter)) / (colony diameter in control - disc diameter).
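The study uses two inhibition-rate formulas: the radius-based rate from the dual-culture screening and the diameter-based rate from the growth-rate method above. Both reduce to one-line calculations; a minimal sketch (all measurements in the usage lines are hypothetical):

```python
# Sketch of the two inhibition-rate formulas used in this study.

def dual_culture_inhibition(rb, rc):
    """Radius-based rate from the dual-culture screening: (Rb - Rc) / Rb.

    rb: mycelial radius (mm) in the direction opposite the bacterium
    rc: mycelial radius (mm) in the direction of the bacterium
    """
    return (rb - rc) / rb

def growth_rate_inhibition(control_diam, treated_diam, disc_diam=5.0):
    """Diameter-based rate (%) from the growth-rate method on amended PDA.

    Net growth is the colony diameter minus the 5 mm inoculum disc.
    """
    control_growth = control_diam - disc_diam
    treated_growth = treated_diam - disc_diam
    return (control_growth - treated_growth) / control_growth * 100

print(f"{dual_culture_inhibition(40, 12):.0%}")  # 70%
print(round(growth_rate_inhibition(65, 25), 1))  # 66.7
```

Subtracting the disc diameter in the second formula ensures only new mycelial growth is compared between treated and control plates.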
. capsici isolation and identification P . capsici was isolated from a diseased pepper plant with blight, cultured on potato dextrose agar (PDA) medium at 28°C, and identified through PCR amplification utilizing primers (ITS1 and ITS4) targeting the internal transcribed spacer sequence .
The procedure followed the method with slight modifications . For the isolation of biocontrol bacteria, the surface soil was first removed, followed by collection of soil from a depth of 5–10 cm in the rhizosphere of 10 pepper plants heavily affected by P . capsici in Gaoqiao Town, Changsha County, Hunan Province (113.33°E, 28.44°N). Ten-gram soil samples were suspended in 90 ml of sterile water, agitated for 30 minutes, and then serially diluted 10,000 times. Subsequently, 100 μl of the suspensions with the highest dilution were spread on Luria-Bertani agar (LB) plates and incubated at 28°C for 48 hours until bacterial colonies appeared. Single colonies were then selected, purified on LB agar plates for 3 days, and subsequently cultured in 500 ml liquid LB medium for 48 hours. To screen for biocontrol bacteria, a 5 mm diameter P . capsici disc was placed at the center of a PDA plate. Next, 5 μl of each of the four bacterial suspensions were inoculated 2 cm away from the disc in a crisscross pattern on the agar plate. The plates were then incubated for 7 days to observe the inhibition of mycelial growth. Bacteria exhibiting anti-oomycetes activity were chosen for further validation. To further validate the biocontrol activity, the candidate bacterium was streaked horizontally on the left of a PDA medium and incubated at 28°C for 48 h to obtain bacterial growth. After bacterial growth, a 5 mm diameter disc of P . capsici was placed on the right of the same plate. The bacterium was 2cm from the disc of P . capsici . Plates with both bacteria and P . capsici were incubated for at 28°C. Each treatment repeated 3 times. When mycelium s of P . capsici on the right of the plate (no bacterium) grow up to the edge of plate, the biocontrol activity of the bacterium was assessed comparing the inhibition of mycelium expansion in the presence of the bacterium strain, and measuring the mycelium radius in the direction of the bacterium. 
For each plate we calculated the average radius of the mycelia using the following formula: the rate of inhibition of mycelium growth = (Rb-Rc) / Rb, where Rb was the mycelium s radius in the opposite direction of the bacterium; where Rc was the mycelium radius in the direction of the bacterium. The bacterium with the highest anti-oomycete activity was analyzed further.
Morphological characters and 16S rRNA sequencing A bacterial strain exhibiting the highest anti-oomycete activity was designated as Pa608. The purified bacterium was cultivated on nutrient agar (NA) and LB medium for 3 days at 28°C, followed by an assessment of colony morphology and color. The Gram characteristics of the strain Pa608 were determined using a Gram stain kit. Subsequently, the shape and size of the strain Pa608 were examined through scanning electron microscopy. For the molecular identification of the strain Pa608, colony PCR was employed to amplify the 16S rRNA sequences using primers 27F and 1492R . The 20 μL reaction mixture contained approximately 50 ng of total DNA, 5 mM each of dNTPs, 20 pmol each of both forward and reverse primers, and 0.5 U of Taq DNA polymerase (TransGen Biotech Co., Ltd., Beijing, China). PCR amplification was performed in a thermocycler applying the conditions: Denaturation for 1 min at 94°C; Annealing for 45 sec at 56°C; Extension for 1 min at 72°C; Final extension for 10 min at 72°C. The PCR products were visualized through agarose gel electrophoresis and then sequenced by Beijing Qingke Biotechnology Co., Ltd. The obtained sequences were manually curated and compared against the National Center for Biotechnology Information (NCBI) database using BLASTn to identify the most closely related bacterial species. A phylogenetic tree encompassing the strain Pa608 and seven other bacteria within the genus Pseudomonas was constructed using the neighbor-joining algorithm with 1000 bootstrap replicates in MEGA7 . 
Extracellular enzyme characteristics
Protease, cellulase, amylase, and phosphate-solubilizing activities of strain Pa608 were assessed on LB agar medium supplemented with 3% skim milk powder; carboxymethyl cellulose agar medium (K₂HPO₄ 2.5 g, Na₂HPO₄ 2.5 g, sodium carboxymethylcellulose 20.0 g, peptone 2.0 g, yeast extract 0.5 g, agar 14.0 g); 1% starch-tryptic soy agar (trypticase 15.0 g, enzymatic digest of soybean meal 5.0 g, NaCl 5.0 g, soluble starch 3.0 g, agar 15.0 g); and a phosphate solubilization medium (glucose 10.0 g, KH₂PO₄ 10.9 g, (NH₄)₂SO₄ 1.0 g, MgSO₄·7H₂O 0.16 g, FeSO₄·7H₂O 0.005 g, CaCl₂·2H₂O 0.011 g, MnCl₂·4H₂O 0.002 g, agar 14.0 g), respectively. The plates were incubated for 3 days at 28°C, and the extracellular enzyme characteristics were assessed by measuring the transparent zones.
Inhibition effect of strain Pa608 on pathogens
The antimicrobial spectrum of strain Pa608 was assessed against several plant pathogens, including Sclerotinia sclerotiorum, Pyricularia oryzae, Diaporthe citri, Botrytis cinerea, Fusarium graminearum, and Penicillium simplicissimum. The confrontational culture method, as described earlier, was used; the width of the inhibitory zone was observed and quantified.
Pot experiment
The pepper variety Zhongke M105 F1 was selected for the pot experiment. The pepper seeds were disinfected with 0.1% sodium hypochlorite and rinsed thoroughly with distilled water three times to remove any sodium hypochlorite residue. The treated seeds were then placed on filter paper in a Petri dish until germination occurred. Once sprouts emerged, they were transplanted 2 cm deep into soil in 10-cm-wide pots, with one plant per pot. The planting medium consisted of a 2:1 mixture of Walga horticultural nutrient soil and vermiculite. Plants were allowed to grow to the eight-leaf stage, and only robust and uniformly developed plants were selected for subsequent experiments.
To prepare the sporangia suspension, P. capsici was first inoculated on oatmeal agar and incubated at 25°C for seven days, then transferred to 28°C for 48 hours under continuous illumination to induce sporangia formation. The sporangia were harvested from the agar surface using a brush. The collected culture was then left at room temperature for three hours to encourage zoospore release, and subsequently diluted with distilled water to an inoculation concentration of 2.0×10⁴ zoospores/ml. For strain Pa608, the inoculum was prepared in 100 mL of LB medium incubated at 28°C and 150 rpm for 24 hours to obtain a bacterial suspension of 1.0×10⁸ cells/ml, as measured by spectrophotometer. Three treatments were implemented. Treatment 1: 3 ml of sterile water was injected as a control. Treatment 2: 3 mL of Phytophthora zoospore suspension (2.0×10⁴ zoospores/ml) was injected 1.5 cm deep into the soil, at a distance from the plants to avoid direct contact. Treatment 3: 3 mL of the strain Pa608 bacterial suspension (1.0×10⁸ cells/ml) was injected into the soil, followed by 3 mL of the zoospore suspension (2.0×10⁴ zoospores/ml). Each treatment comprised 12 pots of peppers, with 3 pepper plants per pot. The plants were watered regularly and kept under high temperature and humidity to promote disease development. Disease severity was evaluated ten days post-inoculation using a rating scale from 0 to 5, and the disease index (DI) was determined using the formula: DI = (Σ(s × n) / (N × S)) × 100, where s denoted the scale rating, n was the number of plants at a specific scale rating, N was the total number of evaluated plants, and S was the maximum scale rating.
Colonization dynamics of strain Pa608 in pepper rhizosphere soil
A total of 6 pots of pepper plants were divided into 2 treatments, with 3 plants in each treatment.
Treatment 1 involved injecting 3 mL of Phytophthora zoospore suspension (2.0×10⁴ zoospores/ml) into the soil at the base of the pepper plants, followed by 3 mL of the strain Pa608 bacterial suspension (1.0×10⁸ cells/ml). Treatment 2 involved injecting 3 mL of the strain Pa608 bacterial suspension into the soil at the base of the pepper plants. Soil samples from the pots were collected on days 1, 3, 5, 10, 15, 30, and 45 and subjected to the gradient dilution method to quantify strain Pa608. Pseudomonas agar medium was used as the culture medium; samples were diluted 1,000-fold, and after incubation at 30°C for 2 days, colonies were checked for their characteristic blue-green color and counted.
Field experiment
The pepper greenhouse experiment was conducted at the Vegetable Research Institute base of the Hunan Academy of Agricultural Sciences in Gaoqiao Town, Changsha County, Hunan Province. In previous years, the greenhouse had suffered severe disease outbreaks, with an incidence rate of over 90%. The ridges were 1.2 meters wide, with 2 rows per ridge, a plant spacing of 35 cm, and a furrow width of 30 cm. There were 2 treatments: in Treatment 1, the blank control, each pepper plant was root-irrigated with 50 mL of sterile water; in Treatment 2, each pepper plant was root-irrigated with 50 mL of the strain Pa608 bacterial suspension (1.0×10⁸ cells/ml). The pepper seedlings were first treated 3 days after transplanting, followed by two additional treatments in early June and early July, for a total of 3 applications. Plant height, yield, and disease incidence were recorded at harvest on July 28th. The disease index was classified according to Sunwoo's standards as previously mentioned.
GC-MS analysis of the metabolites produced by strain Pa608
The fermentation broth of strain Pa608 (after 48 hours of fermentation) was extracted with ethyl acetate.
The ratio of organic solvent to fermentation broth was 1:1, with an extraction time of 8 hours, repeated three times. The extract was concentrated to a paste at 40°C using a rotary evaporator and stored at 4°C until use. Gas chromatography-mass spectrometry (GC-MS) was used to analyze the compound composition of the fermentation broth. The GC-MS analysis conditions followed a published method, with adjustments: the column temperature was ramped to 300°C at a rate of 10°C/min and held for 40 min; helium was used as the carrier gas with a split ratio of 20:1; 1 μL was injected; and the injector temperature was set at 325°C. The mass spectrometry conditions likewise followed the published method, with the ion source stabilized at 280°C and a scan range of 33 to 600 m/z. The volatile compounds were identified by comparing retention times against the NIST mass spectral library, and their relative contents were determined from the percentage of peak areas of the different compounds. A structural analysis of volatile organic compounds with antimicrobial potential was conducted based on relevant literature. The data were analyzed with the NIST Mass Spectral Library version 8 (NIST08.L) to match the acquired mass spectra, qualitatively identify compounds based on matching factors, and retrieve pertinent information for each compound.
Biocontrol activity assessment of metabolites produced by strain Pa608
The compounds with significant peak areas in the GC-MS results, particularly those previously recognized for their antimicrobial properties, were chosen as candidate anti-oomycete substances. The pure forms of these compounds were acquired, and their inhibitory activity against P. capsici was evaluated using the growth rate inhibition method. The final concentrations of the candidate compounds in the PDA medium were 250 mg/L, 50 mg/L, 10 mg/L, and 5 mg/L, with an equal volume of sterile water serving as a control.
One 5 mm diameter disc of P. capsici was positioned at the center of each Petri dish and incubated at a constant 25°C for 7 days. The procedure was replicated three times. The inhibitory rate was calculated using the formula: inhibitory rate = 100% × [(colony diameter in control - disc diameter) - (colony diameter in treatment - disc diameter)] / (colony diameter in control - disc diameter).
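The disease-index and inhibitory-rate formulas above can be sketched together in Python; the ratings and colony diameters below are hypothetical example values, not the study's data:

```python
def disease_index(ratings: list[int], max_scale: int = 5) -> float:
    """Disease index: DI = (sum of s*n / (N * S)) * 100, computed from
    one 0-5 scale rating per evaluated plant."""
    if not ratings:
        raise ValueError("no ratings")
    return 100.0 * sum(ratings) / (len(ratings) * max_scale)

def growth_inhibition(control_diam_mm: float, treated_diam_mm: float,
                      disc_diam_mm: float = 5.0) -> float:
    """Growth-rate inhibition (%), subtracting the 5 mm inoculum disc
    from both colony diameters before comparing."""
    control_growth = control_diam_mm - disc_diam_mm
    treated_growth = treated_diam_mm - disc_diam_mm
    return 100.0 * (control_growth - treated_growth) / control_growth

# Hypothetical example data: 36 plants and two colony diameters (mm).
ratings = [5] * 20 + [4] * 10 + [3] * 6
print(f"DI = {disease_index(ratings):.1f}")                   # DI = 87.8
print(f"inhibition = {growth_inhibition(65.0, 14.0):.1f}%")   # 85.0%
```

Subtracting the disc diameter first ensures that only new mycelial growth, not the inoculum itself, enters the ratio.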
Taxonomic identification and characteristics of extracellular enzyme production
A total of 209 bacterial strains were isolated from the rhizosphere soil of peppers, among which 23 strains exhibited antagonistic effects against P. capsici, including members of Pseudomonas, Bacillus, and Burkholderia. The strain with the largest inhibition zone was designated Pa608. After a two-day incubation at 30°C on an NA plate, strain Pa608 exhibited green colonies, while on an LB plate it showed white, smooth colonies without visible pores. Gram staining was negative. Scanning electron microscopy revealed rod-shaped cells approximately 1–5 μm in length and 0.5–1.0 μm in width. A phylogenetic tree of 16S rRNA sequences showed that P. aeruginosa WZ2029 fell in the same clade as strain Pa608, with a percent identity of 100.00%, suggesting that they were closely related. Based on morphological characteristics and 16S rRNA sequences, strain Pa608 was identified as P. aeruginosa. Strain Pa608 produced protease and cellulase, forming transparent zones; however, no amylase secretion was detected. Moreover, it demonstrated phosphate-solubilizing ability by forming a phosphorus-solubilizing halo.
Inhibition effect of strain Pa608 on P. capsici and other plant pathogens
Strain Pa608 markedly suppressed mycelial growth of P. capsici, forming a clear inhibition zone. Furthermore, it inhibited, to varying degrees, the growth of S. sclerotiorum, P. oryzae, Diaporthe citri, B. cinerea, F. graminearum, and P. simplicissimum.
Pot experiment
At 45 days after treatment, a significant portion of the pepper plants in the T2 group had wilted, with an incidence of 100%. Conversely, pepper plants in the T3 group showed no obvious symptoms, with an incidence of 12%. The disease index of T2, at 55, was markedly higher than that of T3, at 6.6.
The control efficiency reached 88.0%, indicating effective suppression of P. capsici by strain Pa608.
Population dynamics of strain Pa608 in the pepper rhizosphere
Within the first 15 days after inoculation, the population of strain Pa608 in the pepper rhizosphere soil decreased rapidly. From day 15 to day 45, the population declined slowly and tended to stabilize. After inoculation with strain Pa608 alone, its population was slightly higher than when it was co-inoculated with P. capsici. This indicates that strain Pa608 was capable of colonizing the pepper rhizosphere.
Field experiment
Following application of the strain Pa608 bacterial suspension, the incidence rate for Treatment 2 was 48.9%, with a disease index of 17.3. This rate was notably lower than that of the control group, corresponding to a control efficiency of 74.9%. Moreover, strain Pa608 demonstrated a growth-promoting effect on pepper plants, as seen in the significantly greater plant height (19.96 cm) and yield (2611.02 g per plant) in Treatment 2 compared with the control group (15.90 cm and 1209.71 g per plant). Furthermore, pepper plants treated with strain Pa608 exhibited better health and lower mortality than those in the control group, where most pepper plants perished.
GC-MS analysis
GC-MS analysis of the sterile filtrate of strain Pa608 revealed a total of 51 secondary metabolites with a similarity of over 70%. These substances were mainly categorized as alcohols, alkanes, ketones, esters, sesquiterpenes, and phenazines. Among them, 3-carene exhibited a matching factor as high as 88.3%, with a peak area of 1,367,054.6, and α-pinene had a matching factor of 86.85%, with a peak area of 1,427,453.5.
Biocontrol activity assessment of α-pinene and 3-carene
Both 3-carene and α-pinene exhibited anti-oomycete activity against P. capsici. Compared with the control group, α-pinene inhibited P. capsici by 84.9% at a concentration of 5 mg/L, whereas 3-carene achieved only a 35.2% inhibition rate against the pathogen at the same concentration.
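The 88.0% pot-experiment control efficiency is consistent with the standard disease-index reduction formula; the formula itself is not stated explicitly in the text, so it is an assumption here:

```python
def control_efficiency(di_control: float, di_treated: float) -> float:
    """Relative reduction in disease index:
    (DI_control - DI_treated) / DI_control * 100."""
    if di_control <= 0:
        raise ValueError("control DI must be positive")
    return 100.0 * (di_control - di_treated) / di_control

# Disease indices reported for the pot experiment:
# T2 = 55 (pathogen only), T3 = 6.6 (pathogen + strain Pa608).
print(f"control efficiency: {control_efficiency(55.0, 6.6):.1f}%")  # 88.0%
```

The same formula applied to the field trial's 74.9% efficiency would imply an untreated-control disease index of roughly 69, which the text does not report directly.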
The inhibitory activity of strain Pa608 against P. capsici
The utilization of soil-borne biocontrol microorganisms in the rhizosphere to suppress soil-borne pathogens is a significant research focus in sustainable agriculture, benefiting from ample available resources. In our investigation, the P. aeruginosa strain Pa608 exhibited notable inhibitory activity against P. capsici both in vitro and in vivo, consistent with previous findings on Pseudomonas in combating P. capsici. Pseudomonas exhibits a broad antimicrobial spectrum, capable of inhibiting a diverse range of plant pathogens, including fungi and oomycetes. In particular, destructive Phytophthora species such as P. cactorum, P. capsici, and P. infestans can be suppressed by certain Pseudomonas species. In our research, we observed suppression of mycelial growth by strain Pa608, consistent with previous studies. However, the investigation did not assess the suppression of zoospore germination and germ tube elongation by strain Pa608. It is plausible that strain Pa608 also inhibits the germination of motile spores and germ tube elongation of Phytophthora in peppers, as other strains of the same P. aeruginosa species have demonstrated similar activities. Our pot experiment showed that strain Pa608 achieved a high control efficiency against pepper Phytophthora blight of 88.0%, surpassing the 73.1% achieved by Streptomyces olivaceus and the 77% reported for P. lini. Furthermore, the control efficiency of strain Pa608 in field trials was 74.9%, equivalent to the field efficacy (75.16%) of the combination of XJC2-1 and the fungicide dimethomorph, suggesting its potential in field management. Moreover, the high control efficiency also reflects the stability of strain Pa608 in the pepper rhizosphere.
However, the field control efficiency of strain Pa608 was lower than that of a Bacillus mixture (88.0%). This indicates that multi-species combinations of microorganisms can be more effective in biological control than single strains. The trend in biological control is toward artificially synthesized microbial communities that fully leverage the advantages of diverse microbial combinations, exerting continuous and effective control. In this study, we also successfully isolated Bacillus and Burkholderia from the pepper rhizosphere; the next step is to screen these bacteria to construct a biocontrol microbial community centered on strain Pa608.
Colonization characteristics of strain Pa608 in the pepper rhizosphere and its growth-promoting features
The effectiveness of biological control primarily depends on the population density of biocontrol microorganisms, which must readily colonize the plant rhizosphere and reproduce quickly in order to exert long-lasting control effects. Consequently, biocontrol microorganisms with wide adaptability and easy colonization have become the preferred choice for researchers in plant protection. Pseudomonas spp. are widely distributed in the environment and have developed adaptive mechanisms to colonize a wide range of ecological niches, such as animal hosts, water environments, and the rhizosphere. In the plant rhizosphere in particular, Pseudomonas can thrive on plant exudates, leading to abundant proliferation. Strain Pa608 was isolated from pepper rhizosphere soil, demonstrating its ability to colonize soil. Moreover, the detection results confirmed that strain Pa608 could survive in the rhizosphere soil for up to 45 days.
However, the population of strain Pa608 declined after inoculation and stabilized later, possibly for the following reasons: under natural conditions, the population that the soil can sustain is significantly lower than the number introduced by inoculation, so the excess Pa608 cells die from limited nutrients; in the later stages of pepper growth, as pepper roots proliferate and exudates increase, these exudates could nourish Pa608 and sustain its population size. Strain Pa608 demonstrated a growth-promoting effect on pepper plants, as evidenced by the significantly greater plant height and yield. Most biocontrol Pseudomonas species exhibit plant growth promotion activity, with mechanisms including phosphate solubilization and production of indole acetic acid. We found that strain Pa608 possessed phosphate-solubilizing capability, which facilitates the dissolution of phosphorus, aiding its uptake by peppers and indirectly promoting pepper growth. Whether strain Pa608 produces indole acetic acid requires further verification.
Anti-oomycete activity of volatile low molecular weight substances produced by strain Pa608
P. aeruginosa is known to produce various secondary metabolites that play a crucial role in its virulence and interactions with other organisms. In our study, strain Pa608 produced many secondary metabolites, including alcohols, alkanes, ketones, esters, sesquiterpenes, and phenazines. Different strains of P. aeruginosa may produce similar substances, though the types and amounts can differ; these depend on the genetic characteristics of the strain, the type of culture medium, and environmental conditions. Phenazine compounds are redox-active nitrogen-containing heterocyclic molecules that exhibit broad-spectrum antibiotic activity against various fungal, bacterial, and oomycete plant pathogens.
For instance, phenazine-1-carboxylic acid, secreted by the P. aeruginosa strain GC-B26, has been shown to inhibit the growth of both P. capsici and C. orbiculare. In this research, 1,6-dimethylphenazine and 1-hydroxy-6-methylphenazine were detected in the fermentation broth of strain Pa608, indicating that strain Pa608 itself may produce certain types of phenazine compounds. The focus of our research was on volatile low molecular weight substances, diverging from previous studies centered on phenazines and cyclic lipopeptides. 3-Carene is a volatile low molecular weight substance with antimicrobial properties, and it exhibited anti-oomycete activity to some extent in our research. In particular, α-pinene, secreted by strain Pa608, demonstrated significant anti-oomycete activity, with 84.9% inhibition. Previous research has indicated that Burkholderia tropica produces α-pinene, which inhibits the growth of fungal pathogens such as Colletotrichum gloeosporioides, F. culmorum, F. oxysporum, and Sclerotium rolfsii and destroys fungal hyphae, suggesting the potential of α-pinene for developing novel fungicides.
The utilization of soil-borne biocontrol microorganisms in the rhizosphere to suppress soil-borne pathogens is a significant research focus in sustainable agriculture, benefiting from ample available resources. In our investigation, the P. aeruginosa strain Pa608 exhibited notable inhibitory activity against P. capsici both in vitro and in vivo, consistent with previous findings on Pseudomonas in combating P. capsici. Pseudomonas exhibits a broad antimicrobial spectrum, capable of inhibiting a diverse range of plant pathogens, including fungi and oomycetes. In particular, destructive species such as P. cactorum, P. capsici, and P. infestans can also be suppressed by certain Pseudomonas species. In our research, we observed suppression of mycelial growth by the strain Pa608, consistent with findings from previous studies. However, our investigation did not assess the suppression of zoospore germination and germ tube elongation by the strain Pa608. It is plausible that the strain Pa608 also inhibits the germination of motile spores and germ tube elongation of Phytophthora in peppers, as strains of the same species, P. aeruginosa, have demonstrated similar activities. Our pot experiment showed that the strain Pa608 exhibited a high control efficiency against pepper Phytophthora blight of 88.0%, surpassing the 73.1% control efficiency achieved by Streptomyces olivaceus and the 77% reported for P. lini. Furthermore, the control efficiency of the strain Pa608 in field trials was 74.9%, equivalent to the field efficacy (75.16%) of the combination of XJC2-1 and the fungicide dimethomorph, suggesting its potential for field management. Moreover, the high control efficiency also reflects the stability of the strain Pa608 in the pepper rhizosphere. However, the control efficiency of the strain Pa608 was lower than that of a Bacillus mixture at 88.0%.
This indicates that multi-species combinations of microorganisms demonstrate higher efficacy in biological control than single strains. The trend in the development of biological control is to utilize artificially synthesized microbial communities to fully leverage the advantages of diverse microbial combinations, exerting biological control continuously and effectively. In this study, we successfully isolated Bacillus and Burkholderia from the rhizosphere of peppers. The next step involves screening these bacteria to construct a microbial community for biological control centered around the strain Pa608.
The effectiveness of biological control primarily depends on the population density of biocontrol microorganisms, which must readily colonize the plant rhizosphere and reproduce quickly to exert long-lasting control effects. Consequently, biocontrol microorganisms with wide adaptability and easy colonization have become the preferred choice for researchers in plant protection. Pseudomonas spp. are widely distributed in the environment and have developed adaptive mechanisms to colonize a wide range of ecological niches, such as animal hosts, aquatic environments and the rhizosphere. In the plant rhizosphere in particular, Pseudomonas can thrive by utilizing plant exudates, leading to abundant proliferation. The strain Pa608 was isolated from pepper rhizosphere soil, demonstrating its ability to colonize soil. Moreover, the detection results confirmed that the strain Pa608 could survive in the rhizosphere soil for up to 45 days. However, the population of the strain Pa608 declined after inoculation and stabilized at a later period, possibly for the following reasons: under natural conditions, the number of Pa608 cells present in the soil is considerably lower than the number introduced through inoculation, so the excess cells die off due to limited nutrients; in the later stages of pepper growth, as pepper roots proliferate and exudates increase, these exudates could nourish Pa608 and sustain its population size. The strain Pa608 demonstrated a growth-promoting effect on pepper plants, as evidenced by the significantly greater plant height and yield. Most biocontrol Pseudomonas species exhibit plant growth promotion activity, with mechanisms including phosphate solubilization and production of indole acetic acid. We found that the strain Pa608 possessed phosphate-solubilizing capability, which facilitates the dissolution of phosphorus, aiding its uptake by peppers and indirectly promoting pepper growth.
The production of indole acetic acid by the strain Pa608 needs further verification.
P. aeruginosa is known to produce various secondary metabolites that play a crucial role in its virulence and interactions with other organisms. In our study, the strain Pa608 produced many secondary metabolites, including alcohols, alkanes, ketones, esters, sesquiterpenes, and phenazines. Different strains of P. aeruginosa may produce similar substances, although the types and amounts may differ. The types and quantities of secondary metabolites produced by P. aeruginosa are related to the genetic characteristics of the strain, the type of culture medium, and environmental conditions. Phenazine compounds are redox-active nitrogen-containing heterocyclic molecules that exhibit broad-spectrum antibiotic activity against various fungal, bacterial, and oomycete plant pathogens. For instance, phenazine-1-carboxylic acid, which is secreted by the P. aeruginosa strain GC-B26, has been shown to inhibit the growth of both P. capsici and C. orbiculare. In this research, 1,6-dimethylphenazine and 1-hydroxy-6-methylphenazine were detected in the fermentation broth of the strain Pa608, indicating that the strain Pa608 itself may produce certain types of phenazine compounds. The focus of our research was on low molecular weight substances with volatility, diverging from previous studies that centered on phenazines and cyclic lipopeptides. 3-Carene is a low molecular weight volatile substance with antimicrobial properties, and it exhibited anti-oomycete activity to some extent in our research. In particular, α-pinene, secreted by the strain Pa608, demonstrated significant anti-oomycete activity with 84.9% inhibition. Previous research has indicated that Burkholderia tropica produces α-pinene, which inhibits the growth of fungal pathogens such as Colletotrichum gloeosporioides, F. culmorum, F. oxysporum, and Sclerotium rolfsii and destroys fungal hyphae, suggesting the potential of α-pinene for developing novel fungicides.
Taken together, the strain P. aeruginosa Pa608 exhibits strong inhibitory activity against P. capsici, significantly reducing the incidence rate in pot and field experiments and increasing pepper height and yield. In particular, this strain produces α-pinene, which effectively inhibits the growth of P. capsici. The next step is to further explore the mechanism of α-pinene against P. capsici and the synthesis pathway of α-pinene in the strain Pa608.
S1 Table. Major substances in ethyl acetate extracts. (DOCX)
S1 Fig. Characteristics of extracellular enzyme production. A, protease. B, cellulase. C, amylase. D, phosphorylase. (TIF)
S2 Fig. Inhibition effect of the strain Pa608 on some pathogens. A, P. capsici. B, S. sclerotiorum. C, P. oryzae. D, Diaporthe citri. E, B. cinerea. F, F. graminearum. G, P. simplicissimum. (TIF)
The effects of physical activity on brain structure and neurophysiological functioning in children: A systematic review and meta-analysis

Introduction

Physical activity has been associated with a range of physical, behavioral, cognitive and academic benefits. A growing body of literature indicates that the majority of the pediatric population does not even come close to the recommended 60 min of moderately intense physical activity per day for children. Moreover, the prevalence of a sedentary lifestyle among children is rapidly increasing. The evident lack of physical activity among children is especially worrisome in light of existing evidence on the beneficial effects of physical activity on brain development. The beneficial effects of physical activity on the brain are thought to be more long-lasting in childhood than in adulthood, suggesting that physical activity in childhood also contributes to brain functioning in adult life. In line with this idea, physical activity has also been suggested as a potential treatment to improve brain development in pediatric clinical populations, such as children with depression or Attention Deficit Hyperactivity Disorder (ADHD). For example, exercise intervention studies have indicated beneficial effects on behavioral and cognitive symptoms of ADHD. Also, altered brain function and cognitive dysfunction have been found in obese children compared with leaner children. Recent studies have shown that exercise also has beneficial effects on cognition in this population. Nevertheless, to date it remains largely unknown which underlying neural mechanisms give rise to the beneficial effects of physical activity in children. Findings from fundamental neuroscience have identified several pathways through which physical activity may act on brain structure and neurophysiological functioning.
A single bout of physical activity (or short-term physical activity) has been shown to directly enhance cerebral blood flow and to trigger the upregulation of neurotransmitters that facilitate cognitive processes (e.g. epinephrine, dopamine). These immediate effects resulting from a single bout of physical activity are often referred to as acute effects. Longer periods of continuous physical activity (long-term physical activity) are thought to trigger additional pathways that exert beneficial effects on brain development. Long-term physical activity has been shown to elevate the levels of neurotrophic factors (e.g. brain-derived neurotrophic factor and nerve growth factor), which are known to boost neural blood vessel formation and neurogenesis. These prolonged effects of long-term physical activity are often referred to as chronic effects. The observed acute and chronic effects indicate that physical activity has the potential to change brain structure and neurophysiological functioning through differential mechanisms. In line with this evidence, previous studies in children have revealed associations between physical fitness - which is considered an indirect measure of long-term physical activity - and brain structure as well as neurophysiological function. Regarding brain structure, for example, cross-sectional magnetic resonance imaging (MRI) studies in 9-10-year-old children have shown that higher aerobic fitness is associated with larger brain volumes, including the volumes of the basal ganglia and the bilateral hippocampus. Regarding neurophysiological functioning, a number of cross-sectional electroencephalography (EEG) studies in 9-10-year-old children have shown that higher aerobic fitness is associated with greater allocation of attentional resources (as measured by the P3 component of the event-related potential) on tasks measuring interference control, cognitive flexibility, language processing and mathematical processing.
Although these cross-sectional studies indicate an association between physical fitness and neural mechanisms and support the idea that long-term physical activity has beneficial effects on the child's brain, such association studies do not provide causal evidence. Instead, intervention effectiveness studies, such as randomized controlled trials (RCTs) and cross-over trials, are necessary to evaluate causal effects. The current study aims to provide an overview of all available RCTs and cross-over trials testing the causal effects of physical activity on brain structure and neurophysiological functioning in children. Earlier reviews of studies on this topic did not use a systematic approach, based their conclusions (partly) on study designs that cannot provide evidence on causation (association studies or quasi-experimental designs), or did not attempt to quantify the effects. The mechanisms underlying the effects of physical activity on neuroimaging measures may be influenced by health status. Therefore, the current review and meta-analysis makes a distinction between studies in healthy and clinical samples of children. Changes in brain structure and neurophysiological functioning paralleled by changes in cognitive functioning potentially provide more insight into the mechanisms underlying the effects of physical activity. Hence, we provide a narrative review in which we determine whether reported changes in brain structure and neurophysiological functioning are accompanied by beneficial effects of physical activity on cognitive functioning, as indicated by a correlation/regression analysis or by coinciding positive effects of physical activity on neuroimaging and behavioral outcome measures. Where possible, we quantify the magnitude of the effect of physical activity on brain structure and neurophysiological functioning using meta-analytic methods.
Methods

2.1 Study selection

This systematic review and meta-analysis included empirical studies that: (1) used an RCT or cross-over design; (2) examined the effects of moderate to vigorous physical activity on brain structure and/or neurophysiological functioning, where moderate to vigorous physical activity was defined as physical activity that requires a moderate amount of effort and noticeably accelerates the heart rate; (3) included children with an average age between 5 and 12 years; and (4) included a no-intervention control group (RCTs) or control condition (cross-over trials). The electronic databases PubMed, Embase, SportDiscus and Cochrane Library were searched combining search terms (MeSH and thesaurus terms) related to physical exercise and children, and brain imaging or electroencephalography and their equivalents (Table 1A, see Appendix; last search December 2019). The reference lists of all included articles were manually searched for additional relevant articles. This systematic review and meta-analysis was performed according to PRISMA guidelines. The article identification, screening and selection process was performed by two independent reviewers (AM + GV). The initial search retrieved 2275 unique articles, of which 37 articles were deemed relevant based on the screening of title and abstract. These 37 articles were further assessed for eligibility based on full texts, after which 23 articles met all inclusion criteria. Two studies were excluded because of contaminating factors, such as insufficient intensity of the physical activity intervention (relaxation) or the assessment of neurophysiological functioning in relation to the processing of food stimuli. Finally, a total of 26 articles was included in the narrative review, of which 20 articles were suitable for meta-analysis.
2.2 Data extraction

The following data were extracted from the included articles: (1) sample characteristics (for each study group: sample size, mean age and sex distribution); (2) intervention or control features (type, intensity and frequency of physical activity or control sessions); (3) outcome measures (imaging modality and cognitive tests assessed, if available).

2.3 Risk of bias assessment

The quality of included studies was independently assessed by two authors (AM + GV) using the Cochrane Collaboration's tool for risk of bias in randomized trials. This tool examines selection bias (random sequence generation and allocation concealment), performance bias (blinding of participants and personnel), detection bias (blinding of outcome assessment), attrition bias (participants lost during the study) and reporting bias (selective reporting of prespecified outcome measures in methods sections or clinical trial registers). In addition, we evaluated all studies on sampling bias (representativeness of the sample for the targeted pediatric population). For each of these bias categories, studies were classified as low, unclear, or high risk of bias. Inter-rater discrepancies were resolved by consensus.

2.4 Statistical analysis

Statistical analysis was performed using Comprehensive Meta-Analysis Software (CMA), version 3. Meta-analytic effect sizes were calculated separately for acute and chronic effects of physical activity. Further distinction was made between studies concerning (1) brain structure and (2) neurophysiological functioning, with further distinction between (3) children from healthy and (4) clinical samples. In addition, separate meta-analytic effects were calculated for subgroups of studies that reported identical outcome measures in at least two studies (e.g. P3 amplitude, P3 latency).
To calculate meta-analytic effect sizes, individual studies' effect sizes pertaining to each outcome measure were calculated using statistics describing the interaction effect between group and time for RCTs and between condition and session for cross-over trials (n, standard deviation, F value, p-value, or the pre/post mean and standard deviation). Beneficial effects of physical activity on outcome measures (in the experimental group/condition as compared to the control group/condition) were expressed as positive effect sizes. Interpretation of neuroimaging measures is not always straightforward. Therefore, interpretation of the individual studies' effect sizes (positive vs. negative) was based on the following sequential decision chain: (1) the interpretation of the authors, if supported by empirical evidence; (2) the direction of related cognitive effects; or (3) empirical evidence on the developmental course of the outcome measure, in which effects in the maturational direction were interpreted as positive. This decision chain did not result in a clear interpretation for functional MRI measures (fMRI; k = 3). Increased fMRI activation (as measured by the Blood Oxygen Level-Dependent [BOLD] signal) during cognitive tasks (active-state fMRI) could be interpreted as a reflection of greater flexible processing, but decreased activation could also be interpreted as improved learning and an efficiency effect. The effect sizes of these three studies were, according to the authors' interpretation, initially labeled as positive. The meta-analytic results that included these studies were interpreted only in terms of changes in neurophysiological functioning, i.e. no conclusions were drawn about the direction of the results (beneficial or detrimental).
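Although the effect sizes were computed in CMA, the logic of deriving a standardized group-by-time interaction effect from pre/post means and standard deviations can be sketched as follows. This is an illustrative approximation, not the software's exact algorithm; the function name, the hypothetical input values and the choice of the pooled pre-test SD as standardizer are assumptions:

```python
import math

def interaction_d(pre_exp_m, post_exp_m, pre_ctrl_m, post_ctrl_m,
                  pre_exp_sd, pre_ctrl_sd, n_exp, n_ctrl):
    """Cohen's d for the group-by-time interaction: the difference in
    pre-to-post change between groups, standardized by the pooled
    pre-test standard deviation (one common convention)."""
    change_diff = (post_exp_m - pre_exp_m) - (post_ctrl_m - pre_ctrl_m)
    pooled_pre_sd = math.sqrt(
        ((n_exp - 1) * pre_exp_sd ** 2 + (n_ctrl - 1) * pre_ctrl_sd ** 2)
        / (n_exp + n_ctrl - 2)
    )
    return change_diff / pooled_pre_sd

# Hypothetical values: the exercise group improves by 4 points, controls by 1.
d = interaction_d(10.0, 14.0, 10.0, 11.0, 2.0, 2.0, 20, 20)  # d = 1.5
```

Under the sign convention described above, a positive d corresponds to a beneficial effect of physical activity relative to the control group or condition.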
To determine the valence of the meta-analytic effects, we performed a sensitivity analysis by means of an additional meta-analysis leaving out the fMRI findings. To prevent inflation of homogeneity due to correlated observations, effect sizes of multiple outcome measures from the same study were averaged before calculation of the meta-analytic effect sizes. Meta-analytic effect sizes were calculated using the random-effects model to correct for heterogeneity between studies introduced by differences in experimental design, measurement modality and analytic design. The derived meta-analytic effect sizes were further weighted by the inverse of the study variance, thereby accounting for sample size and measurement error. Meta-analytic effects were interpreted using Cohen's guidelines, with definitions of small (d = 0.2-0.5), moderate (d = 0.5-0.7) and large (d > 0.7) effect sizes. Heterogeneity of effect sizes was assessed using the I² statistic, where values of 25 %, 50 % and 75 % were indicative of low, moderate and high heterogeneity, respectively. As the I² statistic can be biased in meta-analyses with small samples, confidence intervals for each effect size were included. Meta-analytic effect sizes were subjected to analyses of robustness (leave-one-out analysis and p-curve analysis) and of the possibility of publication bias (Rosenthal's fail-safe n, Egger's funnel plot asymmetry and the test of excessive significance). The leave-one-out method was used to check the influence of single studies by iteratively removing each individual study from the calculation of the meta-analytic effect sizes. We explored whether meta-analytic effects were disproportionately driven by a single study by visual interpretation of the leave-one-out forest plots in combination with the statistical results. P-curve analysis was performed to check whether the distribution of effect sizes that contribute to a significant meta-analytic effect size is indicative of a true effect (i.e.
the meta-analytic effect has evidential value). P-curves were created using the p-curve application ( http://www.p-curve.com/app4 , 2018), where all p-values below .05 are plotted on the x-axis and the percentage of studies yielding these p-values is plotted on the y-axis. Accordingly, (1) right-skewed p-curves are indicative of evidential value, as true effects tend to be highly significant; (2) flat p-curves indicate no evidential value; and (3) left-skewed p-curves are indicative of flexibility in data analysis (p-hacking), as these contain many p-values just below .05. Evidential value in a set of studies requires that the curve for p-values lower than .025 (i.e. the half p-curve) is significantly right-skewed (p-value of the test for right skew <.05), or that the p-value of this skewness test is at p < .10 for both the full and half p-curve. If the results indicate no evidential value, a follow-up test is performed to examine whether the set of studies had insufficient power to detect evidential value. If this test is significant (p < .05), the conclusion is that the set of studies does not have sufficient analytic power to detect evidential value. Publication bias was assessed by Egger's funnel plot asymmetry, Rosenthal's fail-safe n and the test of excessive significance. Rosenthal's fail-safe n was calculated to determine the number of additional null studies necessary to nullify the overall effect. Fail-safe n values > 5k + 10 were considered robust, where k refers to the number of samples on which the relevant effect size was calculated. Furthermore, the test of excessive significance was used to compare the observed number of significant studies to the expected number of significant studies with a χ²-test (p < .10). The expected number was based on the estimated overall effect size and the power to detect this effect in the individual studies. All tests of significance were two-sided with α = .05.
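To make the pooling and bias statistics concrete, here is a minimal sketch of DerSimonian-Laird random-effects pooling with the I² heterogeneity statistic, together with Rosenthal's fail-safe n. This illustrates the standard textbook formulas, not the CMA implementation; the function names and example inputs are assumptions:

```python
import math

def random_effects_meta(effects, variances):
    """DerSimonian-Laird random-effects pooling.

    Returns the pooled effect, its standard error, and the I^2
    heterogeneity statistic (in percent)."""
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * d for wi, d in zip(w, effects)) / sum(w)
    q = sum(wi * (d - fixed) ** 2 for wi, d in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * d for wi, d in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return pooled, se, i2

def fail_safe_n(z_values, z_alpha=1.645):
    """Rosenthal's fail-safe n: how many unpublished null studies would be
    needed to drop the combined (Stouffer) z below the one-tailed .05 level."""
    return (sum(z_values) / z_alpha) ** 2 - len(z_values)
```

For example, pooling three identical effects yields I² = 0 (no between-study heterogeneity), and the resulting pooled d can then be labeled small, moderate or large using the Cohen thresholds given above.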
Results

This results section provides a narrative description of the findings of all studies (k = 26), followed by the meta-analytic findings (k = 20). We first distinguish between studies addressing acute and chronic effects. Further distinction is made between (1) studies focusing on brain structure and neurophysiological functioning and (2) studies in healthy and clinical samples. The main study characteristics are presented in (acute effects) and (chronic effects). An explanation and interpretation of the effect direction of all imaging measures is presented in .

3.1 Acute effects of physical activity

No studies addressed the acute effects of physical activity on brain structure. One study addressed the acute effects of physical activity on cerebral blood flow. Results indicated no acute effects of physical activity on cerebral blood flow in the frontoparietal, executive control, and motor networks. Eleven cross-over studies addressed the acute effects on neurophysiological functioning, of which seven included healthy children (EEG: k = 6; MRI: k = 1). All studies in healthy children showed physical activity-induced acute effects on neurophysiological functioning. Results indicated improved neurophysiological functioning during rest and goal-directed behavior, greater allocation of attentional resources during task performance and improved conflict processing. Three of the seven studies reported accompanying beneficial effects on measures of cognitive performance or academic functioning. Four studies addressed the acute effects of short-term physical activity on neurophysiological functioning in clinical samples (i.e. ADHD; EEG: k = 4), and all of these studies indicated physical activity-induced beneficial effects. Results indicated greater allocation of attentional resources toward the target stimulus, shorter processing time, improved anticipatory attention performance and motor preparation, and an improved theta/beta ratio in resting EEG.
Two studies reported on co-occurring beneficial cognitive effects and beneficial effects on academic performance .

3.2 Chronic effects of physical activity

Four studies (MRI: k = 4) described the chronic effects on brain structure, of which one study included a healthy population and three studies included a clinical population (obesity: k = 2; deafness: k = 1). All studies used Diffusion Tensor Imaging, which provides an MRI-based measure of white matter integrity (WMI). The study that assessed healthy children observed greater WMI in the genu of the corpus callosum following long-term physical activity compared to the control group . The two studies assessing obese children observed greater WMI following long-term physical activity compared to a control group. Neither of these two studies reported concomitant objective cognitive measures. In contrast, a study in deaf children found decreased WMI following long-term physical activity . That study also observed accompanying effects on measures of cognitive performance, of which some were beneficial, while others were detrimental. Ten studies described the chronic effects on neurophysiological functioning, of which six studies focused on healthy children (EEG: k = 5; MRI: k = 1). Five studies in healthy children showed physical activity-induced effects on neurophysiological functioning. Results indicated improved resting-state attention and altered brain activation in the right anterior PFC , improved error detection , greater efficiency of attention and motor processes , greater allocation of attentional resources during goal-directed behavior and shorter processing time . In addition, the observed changes in neurophysiological functioning were accompanied by improved cognitive task performance in all studies . Four studies described the chronic effects on neurophysiological functioning in clinical samples (EEG: k = 1; MRI: k = 3).
One study investigated the chronic effects of physical activity in children with ADHD and found an improved state of alertness after long-term physical activity as measured by EEG. This result was not accompanied by improved cognitive task performance . All three studies investigating the chronic effects of long-term physical activity in obese children indicated changes in neurophysiological functioning as measured using fMRI. Results indicated altered brain activity during goal-directed behavior and resting-state . None of these studies observed accompanying beneficial effects on cognitive task performance.

3.3 Risk of bias

Results of the risk of bias assessment, using the Cochrane Collaboration’s tool for risk of bias in randomized trials, are shown in . The overall risk of bias of the included studies varied, but was generally low. However, outcome assessors were blinded in only five studies, and in ten studies the included population was not a representative sample of the general healthy or clinical pediatric population . Five of the 26 studies (19 %) were preregistered in clinical trial registers. For these studies, all reported outcome measures included in the meta-analysis were preregistered. Conversely, not all preregistered outcome measures were reported in the available articles.

3.4 Meta-analysis

Results of the meta-analysis are displayed in and . There were no studies available that investigated acute effects of physical activity on brain structure. Meta-analyses of studies which observed the acute effects of physical activity revealed a significant small-sized effect of physical activity on neurophysiological function ( d = 0.32, p = 0.044). Further distinction between children from healthy samples and clinical samples revealed no significant meta-analytic effects in these subgroups. No significant meta-analytic effects were found for the chronic effects of physical activity on brain structure.
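As an illustration of how pooled values such as d = 0.32 arise, here is a minimal, self-contained sketch of one widely used random-effects estimator (DerSimonian-Laird). The review's exact estimator and software are not stated in this excerpt, and any input data are hypothetical.

```python
import math

def dersimonian_laird(d, var):
    """Pool standardized mean differences (d) with per-study variances
    using the DerSimonian-Laird random-effects estimator.
    Returns the pooled effect, its standard error, and I^2 (%)."""
    w = [1.0 / v for v in var]                                # fixed-effect weights
    d_fe = sum(wi * di for wi, di in zip(w, d)) / sum(w)
    q = sum(wi * (di - d_fe) ** 2 for wi, di in zip(w, d))    # Cochran's Q
    df = len(d) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                             # between-study variance
    w_re = [1.0 / (v + tau2) for v in var]                    # random-effects weights
    d_re = sum(wi * di for wi, di in zip(w_re, d)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0     # heterogeneity (%)
    return d_re, se, i2
```

With homogeneous hypothetical studies (e.g. d = 0.3 and 0.5, both with variance 0.04) the estimator reduces to the inverse-variance mean of 0.4 with I² = 0; as the studies disagree more, τ² grows and the weights flatten.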
Meta-analyses of studies that observed the chronic effects of physical activity on neurophysiological function revealed a significant small-sized positive effect of physical activity ( d = 0.39, p < 0.001). Analyses making a further distinction between children from healthy and clinical samples revealed a significant small effect for healthy children ( d = 0.32, p = 0.002) and a large effect for children from clinical populations ( d = 0.94, p = 0.014). We estimated separate meta-analytic effects for specific outcome measures that were available in at least two studies. We found a significant small-sized effect of physical activity on P3 amplitude ( d = 0.39, p = 0.001). When we made a distinction between acute and chronic effects, meta-analytic effects could only be determined for acute effects, for which a significant small-sized effect was found ( d = 0.42, p = 0.006). In further distinctions between healthy and clinical samples, meta-analytic effect sizes could only be determined for healthy samples. Meta-analyses revealed a significant small-sized effect on P3 amplitude for the acute effects of physical activity in healthy samples ( d = 0.42, p = 0.002). For P3 latency, meta-analytic effects could only be determined for all studies together, for the acute effects of physical activity, and for the acute effects of physical activity in healthy children. No significant effects were observed. Lastly, we performed a sensitivity analysis to explore the possibility that our strategy of valuing the physical activity-induced changes in brain activation in the active-state fMRI studies as beneficial influenced our results (see Appendix, Table A2).
We reran all analyses after excluding the results of all three active-state fMRI studies , replicating all the reported significant meta-analytic effects on neurophysiological functioning, with the exception of the effect size for the acute effects of physical activity on neurophysiological functioning, which did not reach significance in the sensitivity analysis ( d = 0.23, p = 0.093). The sensitivity analysis could not be executed for the effect size describing chronic effects on neurophysiological functioning in clinical populations, because the number of available studies dropped below the minimum of two studies required for meta-analysis.

3.5 Heterogeneity, robustness & publication bias

Heterogeneity ranged from 0 to 69%, with high values only for the meta-analytic effect describing the chronic effects of physical activity on brain structure. The leave-one-out analysis revealed that none of the individual studies had an extreme influence on the meta-analytic effect sizes (Fig. 1A, see Appendix) for acute effects of physical activity on neurophysiological functioning (range: d = .232–.416, p = .004–.119), chronic effects on neurophysiological function (range: d = .349–.489; p = .000–.013) and acute effects on P3 amplitude (range: d = .241–.490; p = .000–.073). These findings indicate stability of the meta-analytic effect size estimations. For the meta-analytic effect size of acute effects of physical activity on neurophysiological functioning, it should be noted that the p-values in the leave-one-out analysis did not consistently meet conventional levels of significance (p = .004–.119). This possibly reflects heterogeneity among the study outcomes underlying the effect size, and warrants caution when interpreting this result.
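The leave-one-out procedure is straightforward to sketch: re-pool the effect k times, omitting one study each time, and inspect the range of the resulting estimates. For brevity the sketch below pools with a simple inverse-variance (fixed-effect) mean, whereas the review itself used random-effects models; any input data are hypothetical.

```python
import math

def pool(d, var):
    """Inverse-variance pooled effect (fixed-effect, shown for brevity)."""
    w = [1.0 / v for v in var]
    est = sum(wi * di for wi, di in zip(w, d)) / sum(w)
    return est, math.sqrt(1.0 / sum(w))

def leave_one_out(d, var):
    """Re-pool k times, each time omitting one study, to check whether
    any single study drives the meta-analytic estimate."""
    return [pool(d[:i] + d[i + 1:], var[:i] + var[i + 1:])
            for i in range(len(d))]
```

If the resulting estimates stay within a narrow band, as reported above (e.g. d = .232–.416 for acute effects), no single study dominates the pooled result.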
P-curve analysis for all significant meta-analytic effects (studies concerning acute and chronic effects of physical activity on neurophysiological function and studies concerning acute effects using P3 amplitude) indicated the presence of evidential value (all tests for right skewness of half p-curves: ps < .01). P-curve analysis could not be executed for the significant chronic effects on neurophysiological function in clinical populations and for acute effects on P3 amplitude in healthy children, because the number of studies with significant effects was too low. See Appendix, Figs. A2–A5 for the p-curve plots. Analysis of Egger funnel plot asymmetry revealed no evidence for publication bias ( ps = .19–.91). Fail-safe N values indicated that the reported meta-analytic effects were not robust against influence of publication bias. However, the number of positive findings is as expected from the power of the retrieved studies, reflecting no evidence of excess significance for all meta-analytic results (χ 2 = 0.001–5.048, ps > 0.10).
Discussion

This study is the first systematic review and meta-analysis focusing on the causal effects of physical activity on brain structure and neurophysiological functioning in children from healthy and clinical samples. Based on 26 studies with RCT or crossover designs representing 973 unique children, the results provide evidence of physical activity-induced changes on neuroimaging measures and, in particular, small-sized beneficial effects of physical activity on neurophysiological functioning in children. These findings underline the importance of physical activity for brain development in children. The current study differentiated between acute effects resulting from a single bout of physical activity (or short-term physical activity) and chronic effects resulting from longer periods of continuous physical activity (long-term physical activity). Meta-analysis revealed support for both acute and chronic effects on neurophysiological functioning, while no evidence for effects on brain structure was found. This observed discrepancy is primarily accounted for by the very limited number of studies ( k = 4) that assessed the effects of physical activity on brain structure, which moreover assessed heterogeneous samples of healthy, obese and deaf children. It is unknown whether these groups respond comparably to physical activity. If sample-specific mechanisms contribute to the effects of physical activity, this may have contributed to heterogeneity in the combined effect size. Analyses aimed at specific neuroimaging measures showed that acute effects of physical activity could primarily be driven by changes in the allocation of attentional resources (P3 amplitude), rather than changes in processing time (P3 latency). This is in line with the results of a recent systematic review indicating that physical activity and cardiorespiratory fitness are associated with P3b modulation during cognitive control and attention tasks .
Although this hypothesis awaits replication in future research, a specific effect of physical activity on attention resource allocation would be highly relevant when physical activity is considered as an intervention to promote cognitive functioning in diverse populations with poor attentional skills, such as otherwise normally developing children at school or clinical groups such as children suffering from ADHD. The current review and meta-analysis made a distinction between studies in healthy and clinical samples of children. Although we found no evidence for differences between healthy and clinical populations in the effect magnitude of physical activity, the possibility exists that the dominant mechanisms of action underlying the effects of physical activity on brain structure and neurophysiological function (partly) depend on health status and on the pathophysiology of the disorders studied . For example, it is suggested that physical activity might be a particularly powerful treatment for ADHD because it is thought to upregulate dopamine and norepinephrine, two neurotransmitters that are both implicated in the pathophysiology of the disorder . Interestingly, upregulation of dopamine and norepinephrine is also suggested to underlie the beneficial effects of stimulant medication used to treat ADHD and alleviate symptoms of the disorder . Likewise, vasoactive effects on cerebral arteries and neurotoxicity by hyperinsulinemia are suggested to play a crucial role in altered brain structure and function in people with obesity and may be counteracted by physical activity . The current study does not allow conclusions to be drawn about the effects of physical activity in specific clinical populations, because of the heterogeneous pediatric populations studied (ADHD, obese and deaf children) and because studies into acute effects focused exclusively on children with ADHD whereas studies into chronic effects were primarily focused on obese children.
To provide a better understanding of the potential of physical activity programs as a treatment approach in clinical populations, future studies should elucidate whether the effects of physical activity interact with health status and, more specifically, with the underlying pathophysiological processes that are supposed to be targeted by physical activity. We made every effort to carefully interpret the valence of meta-analytic effects, using a sequential decision chain for interpretation of the studies’ individual effect sizes. Nevertheless, the interpretation of changes in fMRI-derived measures is a challenging matter (e.g. see de Wit et al., 2016). We performed a sensitivity analysis by repeating our analyses after excluding outcome measures with ambiguous interpretation (i.e. three fMRI studies). Only when the original findings were replicated by the sensitivity analysis did we draw conclusions about the valence of the observed effects (i.e. whether they were beneficial or not). This sensitivity analysis replicated the meta-analytic effect concerning the chronic effect of physical activity on neurophysiological functioning, indicating that the evidence for neurophysiological changes in response to physical activity indeed involves beneficial effects. Not all meta-analytic results were replicated by sensitivity analysis. The acute effects of physical activity on neurophysiological function were no longer significant without the three active-state fMRI studies. The difference between results of the original analysis and the sensitivity analysis can be explained by both a loss of statistical power and a meaningful difference in the effect size. In both cases, the results concerning the acute effects of physical activity only support that physical activity may induce changes in neurophysiological functioning. The valence of these effects remains unknown.
Nevertheless, in the context of the evident benefits of physical activity for physical health , it may be considered unlikely that physical activity would have detrimental effects on brain functioning. Future studies that include ambiguously interpretable outcome measures, such as active-state fMRI studies, should include parallel cognitive assessment to allow clear interpretations about the valence of the observed effects. The results of the systematic review provide an overview of all findings on cognitive functioning parallel to the observed changes in neural mechanisms. Almost all included studies in the systematic review reported on cognitive performance along with neuroimaging measures (22/26 studies; 85 %). Results showed that co-occurring improvement in at least one measurement of cognitive or academic performance was observed in half of the studies. More specifically, 55 % (6/11) of the studies that observed acute effects of physical activity on neurophysiological function and 42 % (5/12) of the studies that observed chronic effects on neurophysiological function reported co-occurring improvement. These percentages are in line with results of recent systematic reviews and meta-analyses concerning the effects of physical activity on cognition and academic performance in children, in which small to moderate effects were found . Another interesting finding is that only three studies reported significant associations between imaging and cognitive measures . One possible explanation for the finding that neurophysiological effects are not systematically paralleled by behavioral improvement is the typical use of small study samples in neuroimaging research, limiting the statistical power to reveal the pertinent associations.
Alternatively, the relation between neurophysiological and behavioral effects of physical activity may be non-linear, or a behavioral response to physical activity may not be detected until the neurophysiological response has reached a certain threshold level. This systematic review and meta-analysis has some limitations. Some meta-analytic effect sizes were based on a relatively small number of studies, limiting statistical power and representativeness of evidence. Despite our effort to contact authors to provide additional information for inclusion in our meta-analysis, the proportion of missing data was relatively high (38 % of all outcome measures). It should also be noted that only five of the included studies (20 %) were preregistered trials. We used three different approaches to assess presence of and/or robustness to publication bias (Egger funnel plot asymmetry, Rosenthal’s fail-safe n and the test of excessive significance). The findings indicate limited robustness of the reported effect sizes, but we did not find any evidence for the influence of publication bias on the meta-analytic findings. Nevertheless, the results warrant caution in the interpretation of the obtained effect sizes. However, although evidential value could not be analyzed in two effect sizes (i.e. chronic effects on neurophysiological function in clinical populations and acute effects on P3 amplitude in healthy children) and some instability has been noted in the robustness of a single meta-analytic result (acute effects on neurophysiological functioning), comprehensive bias analyses of all other significant meta-analytic effects revealed neither a lack of evidential value nor limited robustness.
Although the majority of studies (81 %) did not adopt proper procedures for blinding of intervention delivery and outcome assessment, and almost half of the studies (42 %) included a sample that was not representative of the targeted population, the overall risk of bias was generally low among the included studies. The current systematic review and meta-analysis shows that long-term physical activity leads to beneficial changes in neurophysiological function. In addition, short-term physical activity may induce changes in neurophysiological functioning, although this evidence showed limited robustness. Furthermore, there is preliminary evidence indicating that physical activity could be a useful intervention to promote neurophysiological functioning (and cognitive functioning) in diverse pediatric populations. However, more research is required to gain knowledge on the effects of physical activity in such specific populations. High-quality intervention studies should include both neuroimaging techniques and behavioral outcomes. Given the signs of limited robustness of the available evidence, future studies should also consider pre-registration to limit the influence of publication bias in this field. Nevertheless, to date, the current study presents an overview of the best available evidence regarding the causal effects of physical activity on brain structure and neurophysiological functioning in children and underlines the importance of physical activity for brain development during childhood.
All authors have no conflicts of interest relevant to this article to disclose.
The validity and reliability of remote diabetic foot ulcer assessment using mobile phone images | 79150ee9-ad44-4600-83ea-81373b86a52f | 5573347 | Pathology[mh] | Diabetic foot ulcers are a major health problem with significant morbidity and mortality , . Recent global pooled estimates indicate 3.4% of all inpatients have a diabetic foot ulcer, and 1.5% a diabetes-related amputation procedure at any given time . These diabetic foot ulcers lead to major healthcare expenditures and a reduction in people’s quality of life – , and nearly all amputations in people with diabetes are precipitated by a non-healing foot ulcer , . People with diabetic foot ulcers require frequent evidence-based treatment in highly-skilled interdisciplinary foot clinics , . This typically involves weekly ulcer treatment visits to the foot clinic and additional self-care of the ulcer at home between clinic visits by themselves, their carers or home care nurses , . As treatment to achieve ulcer healing often lasts for three months or more , this requires multiple clinic visits. These frequent clinic visits can often be a burden to patients in terms of their time, effort and finances, especially for people living in rural and remote areas or people with travel difficulties. Additionally, weekly visits may still not be enough to detect deterioration of foot ulcers in sufficient time to prevent hospitalisation or amputation, as infections may develop and progress to life-threatening severe infections over just days , . To overcome these limitations and to empower people with diabetic foot ulcers in their self-care away from the clinic, various telemedicine systems have been investigated. The cornerstone of these telemedicine systems is clinical assessment of digital photographic images. Three studies used wound assessment platforms with uploaded high-resolution images from digital cameras – . 
However, these platforms can only be used by health professionals, thereby not increasing patient empowerment. Other studies investigated specially developed advanced stand-alone imaging devices – . These have shown high reliability for measuring ulcer area size and high validity for diagnosing the presence of an ulcer or callus. However, measuring these characteristics is not enough to reduce clinic visits or improve timely detection of limb-threatening disease. For that, more detailed clinical characteristics or treatment decisions (such as the diagnosis of infection, presence of exudate, the need for debridement) need to be assessed. These have scarcely been investigated, with low validity found , . In addition to these validity and reliability findings, all telemedicine systems investigated were expensive and required highly technical equipment – . Anecdotally, clinicians and patients have overcome these practical disadvantages by using mobile phones to capture photographic images of ulcers instead of using these wound assessment platforms. We have seen mobile phone images used for this purpose routinely in multiple daily clinical practice situations; for example, by home-care nurses or by patients for unofficial ‘telemedicine’ consultations with an interdisciplinary foot clinic. To our knowledge, only two studies have investigated the use of mobile phone images for diabetic foot ulcer assessment , . One of them investigated ulcer area measurement only, but did not investigate assessment of any other clinical characteristics . The other study, by Rasmussen and colleagues, compared a standard live clinical assessment with a remote assessment based on images from both an iPhone and a new imaging device . They reported low kappa values for the diagnosis of sixteen clinical characteristics, based on the combined assessment of two remote observers.
However, kappa values measure agreement and reflect reliability, not validity; validity should instead be analysed with likelihood ratios, sensitivity and specificity , , but these were not reported in that paper . Perhaps even more importantly, the treatment decisions that would follow from the clinical characteristics assessed in the iPhone images were not examined. Limited agreement in the assessment of clinical characteristics may in fact still lead to similar treatment decisions; however, this is unknown. Therefore, despite their potential as a practical telemedicine system in diabetic foot ulcer treatment, the diagnostic validity and reliability of diabetic foot ulcer assessment using mobile phone images are unknown. The aim of this study was to determine the validity and reliability of remote assessment of diabetic foot ulcer clinical characteristics and treatment decisions using photographs produced on a mobile phone, in comparison to a reference standard live clinical assessment.
Design

A prospective diagnostic validity and reliability study design was used. The Standards for Reporting Diagnostic accuracy studies (STARD) were used for reporting the study .

Participants

Eligible participants were adults with an existing diabetic foot ulcer who provided written informed consent. People with a cognitive deficit that would impair their ability to read and write or to complete certain technical aspects of the study were excluded. Participants were recruited between July and September 2015 from four Diabetic Foot Clinics within the Metro North Hospital and Health Service and the Metro South Hospital and Health Service, Brisbane, Queensland, Australia. Eligible remote observers were five registered clinical podiatrists with different levels of experience in the management of diabetic foot ulcers who provided written informed consent. For the purposes of this study, observer experience levels in diabetic foot ulcer management were differentiated using formal Health Practitioner Level (HPL) of employment appointment and years of experience specifically treating people with diabetic foot ulcers. The five remote observers recruited were: observer 1, a ‘specialist clinician (HPL5)’ podiatrist with 13 years of diabetic foot experience; observer 2, a ‘senior clinician (HPL4)’ podiatrist with 10 years of diabetic foot experience; observer 3, an HPL4 podiatrist with 8 years of diabetic foot experience; observer 4, an HPL4 podiatrist with 6 years of diabetic foot experience; and observer 5, a newly graduated ‘clinician (HPL3)’ podiatrist with no diabetic foot experience. The observers were recruited from Diabetic Foot Clinics independent of the four clinics from which the participants were recruited, and therefore none of them were involved in the clinical care of the participants.
The reference standard live clinical assessment was the criterion measure used for the purposes of this study, defined as an in-person clinical assessment by a registered clinical podiatrist with significant experience in the management of diabetic foot ulcers. A full-time Podiatry Clinical Educator in the management of diabetic foot ulcers was chosen to perform the reference standard live clinical assessments. The educator was responsible for diabetic foot ulcer education and training at the Queensland University of Technology and had been responsible for new graduate clinical support in the management of diabetic foot ulcers within Queensland Health for the past five years. Prior to this appointment, the clinical educator had been an HPL5 specialist clinical podiatrist with four years of specific diabetic foot experience. Based on the ability to detect kappa values of >0.40 , with an anticipated 30–50% prevalence of the clinical characteristics, 80% power and alpha <0.05, a sample size of 50 participants was needed ; this is somewhat larger than the sample sizes of 20–36 participants in related studies – .

Procedures

After providing informed consent, participants underwent the reference standard live clinical assessment. This visual assessment comprised completing the 12 items of the study form (Table ). The items of the study form were similar to those used in the study by Bowling and colleagues , and included 9 clinical characteristic items and 2 treatment decision items. One additional treatment decision item was added: “If this person wasn’t seen in the clinic, select the time-frame for when this person should be seen in-person”. This question represented the standard final treatment decision of any consultation: the urgency of follow-up treatment.
This reflects the situation in which a clinician or patient sends mobile phone images of a diabetic foot ulcer to an expert clinician, seeking urgent clinical decision advice on when the patient should next attend the clinic for care: the answers “same day” and “next day” were categorised as “urgent treatment by a health professional required”, with the remaining answers categorised as “no urgent treatment by a health professional required”. Additionally, pre-existing clinical information from the Queensland High Risk Foot Form (QHRFF) was used . The QHRFF is a reliable and valid research tool for foot disease with substantial and near-perfect inter-observer agreement, has been extensively described elsewhere, and is standard of care in Queensland . For the purpose of the current study, the age, gender, diabetes, co-morbidities, foot disease history (all self-reported by patients, as described in the QHRFF), most recent HbA1c, and clinical diagnoses of peripheral neuropathy (“present” or “absent”) and peripheral artery disease (“nil”, “moderate”, or “critical”) were used . Immediately following the live clinical assessment, four non-identifiable photos of the ulcer were taken by an independent research assistant using an iPhone 4 (Apple Inc., Cupertino, CA, United States of America), with an image resolution of 1936 × 2592 pixels. When more than one ulcer was present, the largest ulcer was selected as the target ulcer and placed in the centre of the field of view. Research assistants were provided with 1 hour of training in the use of the mobile phone for taking diabetic foot ulcer images before the study, using a standard presentation and a foot model with a foot ulcer for practice. They had limited or no clinical experience in working with people with diabetic foot ulcers, to mirror the experience of patients, relatives or home-care nurses without specific diabetic foot ulcer experience taking the photos, and thereby to avoid potential bias from having an experienced clinician take the photos.
The four photos were: i) a close-up of the ulcer, ensuring that the majority of the wound is in the frame; ii) a mid-way shot, positioning the camera to capture at least a 4–6 cm border around the wound to assess the status of skin and tissue integrity; iii) a distant shot showing the foot in its entirety (with the wound in view); and iv) a mid-way shot, positioning the camera to capture the opposite side of the foot from where the wound is situated, to identify any significant infection or tissue quality and/or colour changes. See Fig. for an example. The mobile phones were not connected to a telecommunications network and were used only for taking images of ulcers for this study. After all measurements had been taken, the remote observers were provided with the mobile phone images and the additional clinical information from the QHRFF. The information from the QHRFF was provided because remote mobile phone assessment would not occur in isolation in daily clinical practice, but with patients for whom some baseline information could be expected to be available to the clinician. The observers were asked to complete the 12-item study form for the target ulcer (Table ) and were allowed to manipulate the touch screen to expand the images further, to more accurately reflect current practice with mobile phone image use. All observers were blinded to the other observers’ remote assessments. Additionally, the observers were asked whether the image quality allowed them to adequately assess the target ulcer, with options ranging from 1 (strongly agree) to 5 (strongly disagree). A minimum of two weeks after their first assessment, the observers completed the same assessment of the same mobile phone images again, without access to their previous assessment. The procedures were approved by the Human Research Ethics Committee of the Prince Charles Hospital, Brisbane, Queensland, Australia (HREC/14/QPCH/204).
All procedures were in accordance with the principles of the Declaration of Helsinki.

Outcome Measures

Validity of remote mobile phone assessment of diabetic foot ulcers was analysed by calculating the following diagnostic values: sensitivity, specificity, positive likelihood ratio (LLR+) and negative likelihood ratio (LLR−) between the reference standard live clinical assessment and the first assessment made by each individual remote observer. Although live clinical assessment can be inherently subjective, it is the internationally accepted reference standard for clinical characteristics, clinical decision-making and treatment planning , has been demonstrated to be reliable using the QHRFF and has been used in similar studies , , , . It is also the reference standard most reflective of the daily practice in which remote mobile phone assessment may be used. The primary endpoints chosen were LLR+ and LLR−, as they provide the most meaningful outcome for clinical decision-making , . An LLR+ >5 or LLR− <0.2 indicated “strong” diagnostic evidence, and an LLR+ >10 or LLR− <0.1 indicated “convincing” diagnostic evidence , . Sensitivity and specificity were secondary endpoints. These values need to be “high” to either rule out or confirm a disease, but as they also depend on prevalence, no generally agreed hard cut-off score is available , . We chose >80% as “high” for sensitivity and specificity . Reliability of remote mobile phone assessment of diabetic foot ulcers was analysed by calculating inter-observer and test-retest reliability. Inter-observer reliability was determined by calculating free-marginal multirater Randolph’s kappa values , , and test-retest reliability by calculating free-marginal bi-rater Bennett kappa values . Free-marginal kappa values were calculated because raters’ distributions of cases into categories were not restricted for any of the observations made. Values >0.7 were considered “adequate” agreement , .
Prevalence of the clinical characteristics assessed during live assessment was unknown to the observers, and it was assumed that this prevalence could not be reliably guessed from clinical experience.

Data analysis

SPSS version 23.0 software (IBM Corporation, Armonk, NY, USA) was used for analysis of descriptive characteristics. For validity, the sensitivity and specificity (both including 95% confidence intervals), LLR+ and LLR− of live vs. remote assessment were calculated per remote observer using Review Manager (RevMan) Version 5.3 (The Nordic Cochrane Centre, The Cochrane Collaboration, Copenhagen, Denmark) and Microsoft Excel (Microsoft Corporation, Redmond, WA, USA). Mean values over the five observers were calculated and presented. There were no missing values. For inter-observer and test-retest reliability, free-marginal kappa values were calculated using the online kappa calculator at http://justusrandolph.net/kappa/ . One observer missed a second assessment for two clinical characteristics (‘infection’ in one patient, ‘slough’ in another patient); test-retest reliability for those two clinical characteristics in that observer was calculated from the observations for the remaining 49 participants. There were no further missing values.

Data availability

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
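The validity statistics defined above follow directly from a 2×2 contingency table of remote vs. live assessment. The study itself used RevMan and Excel; as a rough illustration only, the same quantities can be sketched in Python. The function name and the counts below are hypothetical, and the sketch assumes non-degenerate counts (sensitivity and specificity strictly between 0 and 1, so both likelihood ratios are defined):

```python
# Validity of a binary remote assessment against a live reference standard.
# tp/fp/fn/tn are counts from the 2x2 table: "positive" means the
# clinical characteristic was judged present.

def validity_stats(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)               # remote detects live-positives
    specificity = tn / (tn + fp)               # remote rules out live-negatives
    llr_pos = sensitivity / (1 - specificity)  # >5 "strong", >10 "convincing"
    llr_neg = (1 - sensitivity) / specificity  # <0.2 "strong", <0.1 "convincing"
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "LLR+": llr_pos,
        "LLR-": llr_neg,
    }

# Hypothetical counts for one item in 50 participants:
stats = validity_stats(tp=20, fp=10, fn=5, tn=15)
# sensitivity 0.80, specificity 0.60, LLR+ 2.0, LLR- 0.33:
# neither "strong" nor "convincing" diagnostic evidence under the cut-offs above.
```

Under these cut-offs, an item can reach “high” sensitivity while its LLR+ remains weak, which is the pattern reported in the Results.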
Participants

A total of 53 consecutive people with diabetes mellitus and a foot ulcer provided informed consent for participation in the study. The ulcers of three people healed in the short time-frame between providing informed consent and the study visit, and these people were excluded, leaving 50 participants. Table displays the participant characteristics: mean (standard deviation) age was 61 (11) years, 80% were male, diabetes duration was 20 (13) years, and 60% of the ulcers were located on the plantar side of the foot.

Prevalence during live assessment

Prevalence of seven of the nine clinical characteristics ranged from 18% to 66% during live assessment (Table ). No participant had the two remaining clinical characteristics (“tendon or bone visible” or “wet or dry gangrene”) during live assessment; these two were excluded from further analyses, as diagnostic values cannot be calculated when the denominator is zero. Prevalence of the requirement for the treatment decisions of wound and peri-wound debridement was 70% for both during live assessment (Table ). Urgent treatment was determined necessary for 44% during live assessment (Table ).

Prevalence during remote assessment

During remote assessment, prevalence of the seven clinical characteristics ranged from 6% to 80% (see Supplementary Table ). Over the five remote observers, prevalence of the requirement for the treatment decision of wound debridement ranged from 62% to 98%, and of peri-wound debridement from 78% to 100% (see Supplementary Table ). Prevalence of urgent treatment ranged from 0% to 66% over the five remote observers (see Supplementary Table ).

Validity

LLR+ values ranged between 1.3 and 4.2, with no items having “strong” or “convincing” diagnostic evidence for LLR+ (Fig. ). LLR− values ranged between 0.13 and 0.88 (Fig. ), with one item having “strong” diagnostic evidence for LLR− (the treatment decision of peri-wound debridement; LLR−: 0.13). The remaining LLR− values ranged between 0.33 and 0.88 (Fig. ), not constituting strong diagnostic evidence , . Sensitivity ranged from 32% to 97% (Fig. ), with four items having “high” sensitivity (two clinical characteristics: “granulation tissue” and “moist or exuding wound”; two treatment decisions: “wound debridement” and “peri-wound debridement”). Specificity ranged from 20% to 87% (Fig. ), with one item having “high” specificity (the clinical characteristic “tracking or tunnelling wound”). No item recorded both “high” sensitivity and specificity. All absolute agreement, sensitivity and specificity values (including 95% confidence intervals) per observer are given in Supplementary Table for further information. Mean values per observer did not show major differences between observers, and the most experienced observer scored similarly to the least experienced observer (see Supplementary Table ).

Reliability

Inter-observer reliability kappa values ranged from 0.09 to 0.71, with only the treatment decision item of peri-wound debridement reaching “adequate agreement” (Table ). Mean test-retest reliability kappa values of the five observers ranged from 0.45 to 0.86, with peri-wound debridement again the only item reaching “adequate agreement” (Table ). Individual test-retest reliability scores per observer are presented in Fig. . All observers scored >0.7 for peri-wound debridement, and four of five observers scored >0.7 for the clinical characteristic of granulation tissue. No other item had three or more observers with scores >0.7.

Image quality

Mean image quality rating on a scale of 1–5, with lower scores reflecting higher quality ratings, was 2.4 (standard deviation 0.3; range 1.8–3.0).
Inter-observer reliability kappa on image quality assessment was 0.25, indicating limited agreement between observers on image quality.
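The free-marginal kappa statistics reported above correct observed agreement by a fixed chance level of 1/q for q answer categories, rather than by category marginals. A rough Python sketch with hypothetical ratings (the study used the online calculator cited in the Methods); with two ratings per case this is the bi-rater (Bennett) form used for test-retest reliability, and with five raters it matches the multirater (Randolph) inter-observer setting:

```python
# Free-marginal multirater kappa. `ratings` holds one list of labels per
# case, one label per rater; `q` is the number of available categories.

def free_marginal_kappa(ratings, q=2):
    n_cases = len(ratings)
    r = len(ratings[0])  # raters per case, assumed constant
    p_o = 0.0
    for case in ratings:
        counts = {}
        for label in case:
            counts[label] = counts.get(label, 0) + 1
        # proportion of agreeing rater pairs for this case
        p_o += sum(n * (n - 1) for n in counts.values()) / (r * (r - 1))
    p_o /= n_cases
    p_e = 1.0 / q  # free-marginal chance agreement
    return (p_o - p_e) / (1 - p_e)

# Five raters on a binary item over three hypothetical cases:
kappa = free_marginal_kappa([
    ["yes", "yes", "yes", "yes", "yes"],   # full agreement
    ["yes", "yes", "yes", "yes", "no"],
    ["yes", "yes", "no", "no", "no"],
])
# kappa is about 0.33 here: well below the 0.7 "adequate" cut-off
```

The free-marginal assumption fits this study because, as stated in the Methods, observers were not restricted in how they distributed cases across categories.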
We comprehensively investigated, for the first time, both the validity and the reliability of remote assessment of diabetic foot ulcer clinical characteristics and treatment decisions using mobile phone images. With the exception of the treatment decision of peri-wound debridement, no item resulted in strong validity or adequate reliability. This indicates that mobile phone images should not be used as a stand-alone diagnostic instrument for remote assessment of diabetic foot ulcers. One earlier study investigated clinical assessment of diabetic foot ulcers using mobile phone images , but reported kappa values for validity rather than the statistics appropriate for validity: likelihood ratios, sensitivity and specificity , . Interestingly, however, that study also reported overall low values for clinical characteristics using kappa statistics , as we did for clinical characteristics and treatment decisions using likelihood ratios, sensitivity and specificity. The few investigations using more advanced wound assessment platform cameras also found low validity for assessment of the same detailed clinical characteristics that we investigated with mobile phone images – . This confirms that images should not be used as a stand-alone instrument for diagnosing detailed clinical characteristics or making treatment decisions in people with diabetic foot ulcers. Clinicians who use mobile phone images in daily clinical practice should obtain as much additional information as possible when making treatment decisions based on these images, and should be cautious of their low diagnostic accuracy.
Evidence-based clinical assessment by a trained health professional commonly includes assessment of the presence of peripheral neuropathy, peripheral artery disease and infection, wound size and depth, and, if available, results from radiological or microbiological assessments. When combined with such additional clinical information, mobile phone images become part of a more comprehensive telemedicine system. Two clinical trials have used digital images as part of such a comprehensive telemedicine system to improve treatment decisions for diabetic foot ulcers. In one study, a telemedicine system using digital images was added to standard clinical care to help improve treatment decisions for patients in remote Australian clinical sites. In the other study, intervention patients received two home treatments using digital images transferred via a telemedicine system to an expert clinic for advice, along with one outpatient clinic treatment, compared to standard care patients receiving three outpatient clinic treatments. Despite the shortcomings in diagnostic accuracy of digital images, these studies reported outcomes that were improved over, or similar to, standard clinical care. However, as the authors highlight, this was likely the result of how they used the images as part of an extensive communication platform between trained nurses and specialised diabetic foot clinicians, rather than using the images as the sole diagnostic modality. The importance of using an extensive communication platform approach with trained clinicians at both ends of the telemedicine system was further highlighted by a recent telemedicine study that had to be concluded prematurely because clinicians were not confident recruiting patients when no extensive communication platform approach was available.
These collective findings indicate that if telemedicine approaches are to be truly effective in diabetic foot ulcer care, and are also to facilitate improved patient self-care, they need images or systems with better diagnostic accuracy and extensive communication platforms, including additional clinical information, between the expert clinic and the remote clinicians or patients. To improve diagnostic accuracy, other methods are needed to complement the digital images. Some complementary methods have already been described in more advanced systems. For example, infrared temperature measurement in combination with a digital image holds promise, both to improve diagnosis of infection and to determine urgency of treatment. Small infrared cameras compatible with mobile phones are now on the market, and these have shown adequate quality for diabetic foot ulcer imaging. Another solution to improve the diagnostic accuracy of mobile phone images is to use computerized machine-learning algorithms. Artificial intelligence systems have recently been found to have diagnostic accuracy similar to that of highly trained dermatologists in identifying three types of skin cancer, by making use of computerized machine-learning algorithms. However, the network used 129,450 images to train itself; such a database with reliably annotated diabetic foot ulcers is currently not available. Further, the clinical characteristics that are important for diabetic foot ulcers may vary more between patients and could be harder to detect than the three skin cancer types. With continuously increasing computing power and better availability of diabetic foot ulcer images, this is an area worthy of future research. Other methods to improve diagnostic accuracy that could be considered are training of assessors and improving image quality and resolution.
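As an illustration of how the infrared temperature measurement mentioned above can feed a simple decision rule: diabetic-foot temperature monitoring commonly flags a contralateral (left versus right foot) difference above about 2.2 °C as a sign of inflammation. The threshold and readings below are assumptions for illustration only, not values from the cited studies.

```python
INFLAMMATION_THRESHOLD_C = 2.2  # assumed contralateral threshold, illustration only

def flag_hotspots(left_temps_c, right_temps_c, threshold=INFLAMMATION_THRESHOLD_C):
    """Compare paired plantar sites on the left/right foot; flag large asymmetries."""
    flags = []
    for site, (left, right) in enumerate(zip(left_temps_c, right_temps_c)):
        diff = abs(left - right)
        if diff > threshold:
            flags.append((site, round(diff, 1)))
    return flags

# Hypothetical readings at four matched plantar sites (degrees Celsius)
left = [30.1, 31.4, 33.9, 30.8]
right = [30.3, 31.0, 31.2, 30.9]
print(flag_hotspots(left, right))  # → [(2, 2.7)]
```

Such a rule could run alongside a transmitted image, giving the remote clinician one objective signal to complement the visually assessed characteristics.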
Studying the effects of additional training may improve diagnostic accuracy; for example, the current dataset could be used as a training set, and newly acquired images following training as a test set. However, we did not find any difference in accuracy between experienced and newly graduated observers, so the effect of training might be limited. All photos in this study were taken with an iPhone 4. Newer mobile phone models with better cameras may produce higher-quality images. Assessment of such images might result in better diagnostic accuracy, but this could not be investigated in the current study. Given the limited diagnostic accuracy found in studies using more advanced cameras, it is unlikely that better mobile phone cameras will greatly improve diagnostic accuracy. Also, it cannot be expected that patients will have the newest mobile phone models, so the current method was more reflective of daily practice. In our opinion, future studies should rather investigate methods that can complement mobile phone images than images of higher resolution. Strengths of our study included involving multiple remote observers with different, representative levels of experience in diabetic foot ulcer management. We further mimicked remote assessment as in daily clinical practice, with inexperienced research assistants taking the photos (reflective of inexperienced patients, carers or home care nurses) and remote observers who were not involved in the care of the included participants. This is likely the practical daily clinical situation in countries with vast geographical distances and limited specialised interdisciplinary teams, such as Australia, Norway or Canada.
Finally, the participants were representative of the target population, with the majority being male, a mean age around 61 years, a mean diabetes duration of 20 years, half of the ulcers neuroischemic and 60% located on the plantar side of the foot. Limitations of our study included the use of live clinical assessment as the reference standard. Even though it is the accepted reference standard, live assessment agreement may vary between observers. However, two recent studies (one of them in the same region as the current study) showed adequate to near-perfect inter-observer agreement for live assessment. The intra-class correlations found in those studies were much higher than those we found for remote assessment of digital images. Another limitation was the lack of information on ulcer size, depth and duration available to the remote observers. It was decided not to include this information from the QHRFF so as not to bias the observers, and such information would also not always be available in clinical practice when patients take photos at home. Future studies might investigate whether availability of such information improves observer agreement. Lastly, some variation was seen in image quality in our study, but most observers perceived that the image quality allowed them to adequately assess the ulcer. Given the minor differences in quality, a meaningful comparison between higher-quality and lower-quality images was not possible. It is important for these negative outcomes to be reported, as mobile phone images are, in our experience, already widely used in daily clinical practice for the assessment of diabetic foot ulcers and wounds in general. Mobile phone images are often used in addition to verbal descriptions of diabetic foot ulcers when a patient, carer or home care nurse seeks remote assistance from a specialized team.
Even though these images may tell more than the words used to describe the ulcer, the low diagnostic values found both for diagnosis of clinical characteristics and for treatment decisions are an important warning that caution is needed when clinicians remotely assess such images.
With their low validity and reliability, mobile phone images should not be used as a stand-alone diagnostic instrument for remote assessment of diabetic foot ulcers. Clinicians who use mobile phone images in daily clinical practice should obtain as much additional information as possible when making treatment decisions based on these images, and be cautious of the low diagnostic accuracy. Additional methods may improve the diagnostic accuracy, but these need to be developed further before they can be used in daily clinical practice.
Requirements, expectations, challenges and opportunities associated with training the next generation of pharmacometricians

Pharmacometrics has evolved from a descriptive science into an applied science that has been increasingly used in all phases of drug development over the last decades. Today's application for accelerating and streamlining drug development is referred to as model-informed drug development (MIDD) or, more broadly, model-informed drug discovery and development (MID3). Due to the increasing application of MID3 approaches, the rapid emergence of new data analysis and computational methods, and the increasing complexity of drug development and regulatory evaluation processes, the demands on pharmacometricians have been evolving as well. It is no longer sufficient to master a certain tool or technical skill. Instead, these skills need to be applied in a team-based environment to solve a drug development problem. Strong technical skills and the ability to identify when pharmacometrics analyses can be used to answer a particular question are of course the foundation for every pharmacometrician. In addition to these foundational technical skills, it is our firm belief that a successful pharmacometrician should ideally: (1) be an effective communicator, (2) be able to think strategically, and (3) be able to influence team-based decisions, as outlined in Figure .

Scientific knowledge and technical skills

A solid foundation of scientific knowledge (e.g., basic pharmacokinetic/pharmacodynamic [PK/PD] and pharmacology concepts) and technical pharmacometrics skills is the basis for successfully applying MID3 approaches.
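To make the basic PK concepts mentioned above concrete, the sketch below simulates a one-compartment oral-dose model with log-normal between-subject variability on clearance and volume, a toy version of the calculations that NLME PK modeling builds on. All parameter values are invented for illustration.

```python
import math
import random

def conc_one_cmt_oral(t, dose, ka, cl, v, f=1.0):
    """Concentration at time t for a one-compartment model with first-order absorption."""
    ke = cl / v  # elimination rate constant
    return (f * dose * ka) / (v * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

random.seed(1)
dose, ka = 100.0, 1.5                   # mg, 1/h (hypothetical)
cl_pop, v_pop, omega = 5.0, 40.0, 0.3   # L/h, L, SD of log-normal inter-individual variability

for subject in range(3):
    # Draw individual parameters around the population values
    cl = cl_pop * math.exp(random.gauss(0, omega))
    v = v_pop * math.exp(random.gauss(0, omega))
    c2h = conc_one_cmt_oral(t=2.0, dose=dose, ka=ka, cl=cl, v=v)
    print(f"subject {subject}: CL={cl:.1f} L/h, V={v:.1f} L, C(2h)={c2h:.2f} mg/L")
```

In a real NLME analysis, the variability parameters are estimated from observed concentration data rather than assumed; the simulation direction shown here is what clinical trial simulation builds on.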
Foundational technical skills include, but are not limited to, nonlinear mixed effects (NLME) PK/PD modeling, mechanistic PK/PD modeling (including physiologically-based pharmacokinetic (PBPK) and quantitative systems pharmacology (QSP) modeling), and clinical trial simulations, as shown in Table . We believe that a strong technical skill set comprises familiarity with pharmacometric methods, software and programming language(s); the ability to critically assess the scientific validity of a model and its associated parameter values; and knowledge of pathophysiology, pharmacokinetics, pharmacology, toxicology, statistics and mathematics. These foundational technical skills are essential for influencing decision-making in drug discovery and development as well as during regulatory review. With the increased use of MID3 approaches in drug discovery and early drug development, familiarity with emerging sciences and technologies, such as machine learning, artificial intelligence, and models relating the structural properties of chemical compounds to potency, safety or PK, is becoming more important. As a consequence, the spectrum of required technical pharmacometrics skill sets has widened even further, and two questions arise: to what extent do pharmacometricians need to cover the entire width of the spectrum, and is there a need for specialization among pharmacometricians?

Effective written and verbal communication

Effective written and verbal communication is a generic skill set that is currently expected from professionals in almost any area. It includes both active communication and active listening. The latter is particularly important in team-based environments, where it helps to anticipate and resolve conflicts, negotiate solutions, and seek compromises. However, effective communication of MID3 approaches and results requires an additional layer of attention.
Rather than focusing on technical details, pharmacometricians who are effective communicators focus on the context of use, tailor the wording and complexity of their message to the audience, and help teams that are largely composed of non-pharmacometricians coalesce on answers to some key questions: What are the key strategic objectives/questions? What information and data need to be collected to answer these questions, and how can a modeling approach help? What do we know thus far, with how much certainty, and what assumptions are being made, either in general or by the model? What are the decision criteria (both quantitative and qualitative), and what is the impact/risk of the decision? What do the model-based simulations suggest in terms of efficacy, safety, and potential next steps or future studies?

Strategic thinking

Strategic thinking entails the ability to anticipate both challenges and opportunities and to plan a course of action accordingly. Strategic thinking in the context of drug development requires a thorough understanding of the drug development process and applicable regulations and guidelines, as well as an appreciation of organizational constraints (cost, time, value/risk). Obtaining this understanding takes time, which is often the reason why junior pharmacometricians have difficulty leveraging MID3 approaches to streamline and accelerate drug development. Exposing junior pharmacometricians to drug development problems already during their training program (e.g., through internships in industry or regulatory agencies, or joint research projects with industry or regulatory agencies) is consequently important.
On the other hand, MID3 approaches provide an opportunity to facilitate strategic thinking because they allow for the integration of complex knowledge from multiple sources, the exploration of different scenarios by quantifying assumptions, and the prospective choice of the scenario that best meets the organization's goal (e.g., the most pragmatic one or the one with the highest probability of success).

Influencing decision making

The combination of technical and strategic skills ensures that pharmacometricians can identify critical questions in drug development programs that may be answered using MID3 approaches. Strategic skills, as well as the ability to influence and negotiate, are essential for identifying critical questions within and across project teams, providing teams with an option for decision making based on modeling and simulation, and conveying the solutions in a manner that engages the audience. The ability to systematically integrate information and extrapolate beyond what has already been studied holds great potential for influencing decisions in drug development and regulatory approval. To broaden the impact that pharmacometricians can have on a final decision, it is important for them to be involved in all phases of drug discovery and development, including prospective study design, execution of the study, analysis of the data, and simulations for the next trial(s), along with making go/no-go decisions. Establishing this mindset early on, in focused teaching and research curricula that integrate drug discovery and development with pharmacometrics, is consequently beneficial. At the same time, it is important to remember that decision making is a complex process, affected by evidence, beliefs, assumptions and bias, a combination recurrent in common judgments. Biases in judgment reveal heuristics in our thinking under uncertainty, which can lead to severe and systematic errors. Decision making is also impacted by the way a scenario is framed.
For example, a 90% chance of success would likely be perceived more favorably than a 10% risk of failure, although they are mathematically the same. We believe that pharmacometricians must understand this interplay and continuously educate themselves on how to maximize the impact of MID3 with the overall goal in mind (i.e., to accelerate and streamline drug development and ultimately improve patient care). At the same time, they must be willing to make trade-offs when necessary (i.e., be adaptive and pragmatic to achieve consensus among team members).
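The framing point can also be made numerically: a probability of success and its complementary risk of failure describe the same fact, and chaining hypothetical stage-wise probabilities shows how overall program risk accumulates. The numbers below are invented for illustration.

```python
# Hypothetical stage-wise probabilities of success (PoS) for one program
stage_pos = {"phase 1": 0.9, "phase 2": 0.5, "phase 3": 0.6}

# Overall PoS is the product of the (assumed independent) stage probabilities
overall_pos = 1.0
for stage, p in stage_pos.items():
    overall_pos *= p

print(f"overall PoS = {overall_pos:.2f}")                              # 0.27
print(f"same fact framed as risk of failure = {1 - overall_pos:.2f}")  # 0.73
```

Whether a team hears "27% chance of success" or "73% risk of failure" can change the decision, even though the underlying quantity is identical; this is exactly the framing effect described above.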
On top of the general increase in demand for pharmacometricians, there is an imbalance among supply, demand, and professional working opportunities between different regions of the world, resulting in geographic and academic brain drain. Particularly the latter poses an imminent threat to the pharmacometrics community because, if this trend continues, we will soon reach a point where we no longer have a sufficient number of academicians, particularly at the Associate and Full Professor level, who are able to train the next generation of pharmacometricians. To overcome these challenges, a general rethinking of traditional, siloed "business models" toward joint efforts between academia, industry, and regulatory agencies will be required. These efforts can be established at various levels, ranging from loose affiliations, such as adjunct appointments or internship opportunities for students and trainees, to structured partnerships with dedicated logistic, financial, educational, and research infrastructure support.
The latter would make it possible to overcome the limitations of individual stakeholders (e.g., limited time for developing concepts or platform models outside the direct drug development pipeline, or teaching drug development without having worked in industry) and would provide planning security (e.g., proactive workforce pipeline development, PhD and postdoctoral support for the duration of the training program, or increased utilization of large-scale databases for disease platform model development) for all parties involved. These joint efforts would also allow for the development of applied training modules, in which stakeholders bring their individual strengths to the table (i.e., concepts and hands-on software training [academia], drug development context and possibly data [industry], and regulatory context [regulators]). Combining forces would also allow us to stay abreast of the rapidly evolving drug development and regulatory evaluation landscape and to offer training for new modalities, concepts, and analysis approaches in a timely fashion. Ideally, these partnerships would be interdisciplinary in nature, enabling a broader vision of problems and ultimately sparking innovation by crossing traditional knowledge boundaries. A transdisciplinary approach that integrates, for example, PBPK, machine learning, and artificial intelligence, or pharmacometrics and pharmacoepidemiology, would also further a mindset of constant learning and collaboration, which is key to success in team-based environments. To facilitate these interactions, we collectively composed a list of proposed teaching and training activities needed for developing technical, strategic, and communication and influencing skills (Table ). We recognize that this list, although too lengthy for any single PhD or postdoctoral fellowship program, is not all-encompassing, and that the outlined activities should be tailored to the individual trainee's educational background and work experience.
We also recognize that training activities in academia may have to be complemented by downstream activities. For example, two-way sabbaticals may allow working professionals from industry or regulatory agencies to retool in academia, whereas academicians could stay abreast of the latest advances in drug discovery, development, and regulatory evaluation while spending time in industry or at an agency. Finally, we do not intend to infringe on individual faculty's freedom to train their students as they see fit, to dismiss previous curricula proposals, or to suggest that academia should take sole responsibility for the proposed teaching and training activities. We rather intend to use this proposal to spark a broader conversation among stakeholders in the pharmacometrics arena, to collectively build consensus on key skills, and to outline viable avenues for how best to develop them. As such, we invite all stakeholders to join this conversation and welcome any constructive feedback on our proposal. No funding was received for this work. The authors declared no competing interests for this work.
Recent Progress in Visualization and Analysis of Fingerprint Level 3 Features

Introduction

Fingerprints are the patterns on fingertips in which raised friction ridges and recessed furrows are regularly arranged. They have been regarded as some of the most valuable and solid evidence in court due to their uniqueness, immutability and permanence. Fingerprints carry sufficient and reliable discriminative characteristics, which ensures the acceptance of fingerprint comparison as a valid individualization method. Generally, fingerprint characteristics are classified into three dimensions, namely level 1, level 2 and level 3 features (Figure ). Specifically, level 1 features include the macro pattern types and ridge flows, such as loop, whorl, arch and accidental. Level 2 features give details at a deeper scale, termed Galton characteristics or minutiae points (ridge ending, enclosure, bifurcation, hook, eye, etc.). Level 3 features contain all microscopic attributes of ridges, pores, incipient ridges, warts, creases, scars, etc. Current fingerprint technology has developed with a primary focus on the first- and second-level features. As is well known, 6–17 minutiae (varying from country to country) are taken to guarantee the success of fingerprint recognition. Nevertheless, it is not always satisfactory to process fingerprints by employing only patterns and minutiae points. The main reason is that fragmentary or deformed fingerprints are frequently encountered at crime scenes. When comparing these problematic fingerprints against the prints in a database, their insufficient characteristics may cause fingerprint mismatches and thus reduce the discriminatory power. Moreover, fingerprints found in practice are often invisible; these are called latent fingerprints (LFPs) and need to be visualized before recognition can be conducted.
It has to be pointed out that conventional fingerprint treatments may obscure details and even produce pseudo-characteristics, which decreases identification accuracy. In addition, fingerprints and their level 1–2 details can easily be faked by molding or inkjet printing methods. Thus, spoof and real samples cannot be discriminated by a minutiae-based fingerprint matching system. Apart from level 1–2 features, level 3 fingerprint features are also permanent, immutable and unique. Back in 1912, Locard showed that 20–40 pores are enough to support a personal identification opinion. Since then, third-level-feature-based algorithms have been proposed and have improved the performance of recognition systems to some extent. Jain et al. reported that the error matching rate declined by 20% after level 3 features were combined with level 1–2 features. Recent studies indicate that third-level features are useful for obtaining additional information beyond individualization, such as donor gender, age, race and health. Thus, level 3 details have the potential to offer a new strategy for recognizing problematic (incomplete, deformed, or forged) fingerprints and even for donor profiling. Unfortunately, the actual usage of level 3 details accounts for less than 1%. The main reasons are that current visualization reagents for LFPs and deposition methods cannot clearly display the third-level structures, and that fingermarks left at crime scenes usually have poor quality, with level 3 features insufficient for the subsequent identification procedure. Besides, fingerprint images are routinely captured at a resolution of 500 pixels per inch (ppi), which cannot meet the standard (≥1000 ppi) for third-level feature extraction. Last but not least, no systematic analytical methods for level 3 features have been established at home or abroad.
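The resolution requirement mentioned above follows from simple arithmetic: at a given capture resolution, one pixel covers 25,400/ppi micrometres, so a pore at the lower end of the reported 50–265 μm size range spans barely one pixel at 500 ppi.

```python
MICRONS_PER_INCH = 25_400

def pixels_across(feature_um, ppi):
    """How many pixels a feature of the given size spans at the given resolution."""
    pixel_um = MICRONS_PER_INCH / ppi
    return feature_um / pixel_um

for ppi in (500, 1000):
    print(f"{ppi} ppi: pixel = {MICRONS_PER_INCH / ppi:.1f} um, "
          f"50 um pore spans {pixels_across(50, ppi):.2f} px, "
          f"265 um pore spans {pixels_across(265, ppi):.2f} px")
```

At 500 ppi a 50 μm pore is essentially a single pixel and cannot be reliably segmented, whereas at 1000 ppi it spans about two pixels across, which is why ≥1000 ppi is treated as the minimum for third-level feature extraction.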
Although high-resolution (≥1000 ppi) fingerprint imaging techniques have driven the growth of third-level-feature-based algorithms, some challenging issues remain for improving comparison accuracy. The urgent demand for introducing level 3 features into fingerprint recognition and donor profiling has attracted not only forensic experts but also researchers from other fields. To date, considerable efforts have been devoted to detecting and analyzing third-level details. It is therefore necessary to give an overview of the recent advances in level 3 details, with an emphasis on their reliability assessment, visualization methods, and potential for individualization, donor profiling and other application scenarios. Specifically, this minireview is organized into four main sections. The first provides a general description of the level 3 feature types and the fundamental studies on their quality and reliability. The second introduces the multivariate techniques for detecting third-level features, involving physical interaction methods, residue-responsive reagents, electrochemical techniques and mass spectrometry (MS) methods. The third illustrates the application potential of level 3 characteristics, particularly in personal identification, donor profiling, fingerprint age determination, spoof fingerprint differentiation and even disease diagnosis. In the last section, future directions for the detection and analysis of level 3 details are outlined, followed by a summary.
Reliability of Level 3 Features

Perceptions vary widely about which details fall into the level 3 categories. Under the view that level 3 features are everything except the fingerprint flows, patterns and minutiae points, incipient ridges, warts, creases and scars are considered third-level characteristics. However, Champod holds that they should be ascribed to level 2 features because they do not require further magnification to be recognized. Actually, level 3 features involve all microdimensional attributes of a ridge. From this perspective, incipient ridges, creases and scars belong to level 3 features only when the focus is on their microscopic details, such as size, shape, length, width and angle. Beyond these controversial features, the ridge contour and width (termed ridgeoscopy), as well as pore shape, size, location, frequency and interspace (termed poroscopy), are also included in the third-level features. Since level 2 details alone are incapable of supporting comparison of problematic fingerprints (low-quality or spoof fingerprints), many researchers have turned to exploring the evidentiary power of level 3 details. Nonetheless, level 3 details are easily affected by multiple factors, such as the physical condition of the donor, deposition conditions and storage circumstances. Hence, it is essential to clarify the reliability of level 3 features under various conditions. Generally speaking, reliability can be assessed by reproducibility and persistency, that is, whether level 3 details can be reproduced across several depositions or over a time interval. Given that poroscopy and ridgeoscopy have been broadly discussed in many publications, we primarily introduce the reproducibility and persistency of these two features, followed by a detailed summary table (Table ).

2.1 Sweat pores

Sweat pores, distributed along the papillary ridges, are formed by the sweat duct traveling from the dermis to the epidermis.
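Once pore centroids and radii have been detected in an image, the pore parameters discussed in this section (area, pore-to-pore interspace, frequency per ridge length) reduce to simple geometry. The coordinates below are hypothetical values in micrometres, chosen only to be in the same order of magnitude as figures reported in the literature.

```python
import math

# Hypothetical detected pores along one ridge: (x_um, y_um, radius_um)
pores = [(0, 0, 35), (600, 15, 40), (1180, -10, 30), (1750, 5, 45)]

# Pore area from the fitted radius (treating each pore as a circle)
areas = [math.pi * r ** 2 for _, _, r in pores]

# Pore-to-pore interspace: Euclidean distance between consecutive centroids
gaps = [math.dist(pores[i][:2], pores[i + 1][:2]) for i in range(len(pores) - 1)]

ridge_length_cm = 0.24  # 2400 um of ridge observed (hypothetical)
frequency = len(pores) / ridge_length_cm  # pores per cm of ridge

print(f"mean pore area = {sum(areas) / len(areas):.0f} um^2")
print(f"mean interspace = {sum(gaps) / len(gaps):.0f} um")
print(f"pore frequency = {frequency:.1f} pores/cm")
```

The point of such measurements is comparability: only if area, interspace and frequency are reproducible across depositions (the question this section examines) can they carry evidentiary weight.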
Locard claimed that pores are permanent and vary from one person to another. In general, sweat pore features consist of pore size, shape, location, distribution, frequency and pore‐to‐pore interspace. Figure shows the schematic measurements of the sweat pore parameters commonly applied in current research. Various attempts have been made to ascertain the reproducibility and persistency of sweat pores under different conditions. The shape of pores can be square, triangular, round, oval or irregular. It should be noted that pore shape is usually measured by pore size or pore area. The pore size is commonly 50–265 μm in diameter. The observed size depends on the deposition or detection methods, deposition pressure, perspiration activity and the fingerprint donor, among other factors. Ashbaugh suggested that the pore area was not reliable for individualization, though without evidence to support this assertion. One study explored the influence of different detection methods on pore area. It reported that the pore area was unchanged in high‐quality inked prints, while latent and livescan prints did not accurately reproduce it. In contrast, research by Sutton et al. showed that the pore area of inked fingerprints was not reliable, independent of the deposition substrate. The group also found the parameter was variable in fingerprints developed using cyanoacrylate or ninhydrin methods. Fu et al. further indicated that the pore area of inked fingerprints varied when different ink quantities or deposition pressures were applied. Specifically, the pore size decreased as the ink amount or deposition pressure increased. These results demonstrate that ink and conventional visualization methods contribute substantially to the variability of the pore area or size.
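As a minimal illustration of how such a size measurement works in practice (a sketch under stated assumptions, not a procedure from any of the cited studies), the equivalent diameter of a segmented pore blob can be derived from its pixel area and the scan resolution; the 1000 ppi resolution and the pixel areas below are assumed values:

```python
import math

PPI = 1000                # assumed scan resolution (>=1000 ppi is the level 3 threshold)
UM_PER_PX = 25_400 / PPI  # 25.4 um per pixel at 1000 ppi

def equivalent_diameter_um(area_px):
    """Diameter (um) of a circle with the same area as the segmented pore blob."""
    area_um2 = area_px * UM_PER_PX ** 2
    return 2.0 * math.sqrt(area_um2 / math.pi)

# Illustrative blob areas (in pixels); flag those outside the commonly
# reported 50-265 um diameter range.
for a in (2, 5, 30, 80):
    d = equivalent_diameter_um(a)
    print(f"area={a:2d} px -> diameter={d:5.1f} um, typical={50 <= d <= 265}")
```

The same conversion applies to any area-based pore statistic, which is why the reported diameters depend so strongly on how the blob boundary is segmented.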
As direct microscopic imaging avoids deposition effects and physical uncertainty, Sutton's team observed the pore area through direct fingerprint photographs and found that the day of collection had a significant impact on pore area measurement. Concretely, the pore area was reproducible over one hour but not over one month. In 2011, Oklevski found that the pore size of inked fingerprint samples changed over a dactyloscopic time interval of 48 years, which strengthened the case against the reliability of the pore area parameter. Cao et al. indicated that the sweat pore size changed dynamically along with epidermal replacement (over 28 days) and that this change was individual‐dependent. Dhall et al. later demonstrated the irreproducibility of the pore area over ten consecutive days. Additionally, better pore quality was achieved on the sticky side of adhesive tape than on glass substrates. Recently, Zhou et al. argued that although the collection period affected the pore size, the variation was far less obvious than the changes caused by deposition or detection methods and pressures. Our recent work, published in 2021, drew the same conclusion: the pore area was subject to high variability across depositions. Pore frequency is another feature that has fascinated researchers. Locard found that the pore number may vary from 9 to 18 pores per cm of ridge. Another statistical analysis showed a pore density of 419–519 pores/cm². Gupta's group validated that the pore frequency in the periphery of fingerprints correlated significantly between the index and ring fingers. A pore impression may be open in one deposition and closed in another owing to differences in sweat gland secretory activity, deposition pressure, and detection or capture methods. One previous study suggested that two inked impressions from the same finger displayed a large disparity in pore density. The reason was explained by Luo et al.
Ink deposition, they argued, cannot accurately reflect the sweat pore number, especially for thick ink printings or donors with small pores. Fu et al. proposed that deposition pressure was another factor, in that a pore undergoes distortion and stretches to occupy the openings when pressure is applied. They found that the pore number was well reflected under low pressure and decreased as pressure increased. Monson et al. systematically assessed the reproducibility of level 3 details over time while considering the influence of capture methods. In detail, direct photographs presented pores whose frequency did not vary even over a ten‐year interval, while for holographic or ink‐rolled impressions the pores appeared obscured over a one‐month observation. Additionally, livescan methods failed to display the same level of detail captured by the other methods, particularly at the third level. Singh et al. proposed that the detected pore frequency differed considerably depending on the substrate types on which LFPs were deposited and the enhancement methods used to process them. Moreover, the detected number of pores was consistent with that of the minutiae. Interestingly, livescan prints obtained every hour over eight hours indicated that sweat pores did not periodically close and open. This hypothesis contradicted previous findings that a pore closed at one stage had been found open at another time point. The authors proposed that the main reason was perhaps due not only to pore physiology but also to ink and pressure. However, additional experimental data are needed to support such a conclusion. The observation interval (one hour) may be too long relative to pore activity; in other words, pore activity was unknown during the non‐observation period, in which sweat pores may have periodically closed and opened. Hence, the pores of livescan prints should be observed at short intervals or monitored in real time.
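One simple way to quantify how many pores reappear between two depositions of the same finger is nearest-neighbour matching within a spatial tolerance. The sketch below is a hypothetical illustration (the coordinates, in micrometres, and the 50 μm tolerance are invented for the example and are not from the reviewed studies):

```python
import math

def match_pores(ref, probe, tol_um=50.0):
    """Fraction of reference pores with a probe pore closer than tol_um."""
    if not ref or not probe:
        return 0.0
    hits = sum(1 for p in ref if min(math.dist(p, q) for q in probe) <= tol_um)
    return hits / len(ref)

deposition_1 = [(100, 100), (250, 120), (400, 90)]
deposition_2 = [(110, 95), (255, 130)]  # one pore failed to reproduce
print(match_pores(deposition_1, deposition_2))  # 2 of 3 reference pores matched
```

A match rate computed this way makes reproducibility claims comparable across studies, provided the tolerance is chosen consistently relative to the observed positional drift.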
We recently reported that the number of sweat pores was consistent with that of the live fingertip, further confirming the theory of Singh et al.: the presented pore number was the same across several depositions when the effects of ink and pressure were eliminated. The position of pores has also been extensively investigated and has inspired high hopes for individualization. It refers not only to the relative location on the friction ridge but also to pore‐to‐pore location and distance, as well as the shape the pores form together. The pores of inked fingerprints were reported to retain their spatial position relative to one another over 48 years. Luo et al. further emphasized that the pore location remained relatively stable under pressures of 200 g, 600 g and 1000 g. Monson et al. validated that the pore location captured by direct photographs remained unchanged even over a ten‐year interval. A low effect of substrates and development methods on relative pore location was also detected by Singh's group. Zhou et al. subsequently published an article on the reproducibility of pore‐to‐pore distance and angle over 21 years. The pore‐to‐pore angle gave an excellent reflection of the location of pore groups and was shown to be more stable than the interspace. Our group compared the frequency distribution of the distance between adjacent sweat pores in three independent depositions, with results consistent with earlier research. Very recently, Dhall's team also reported that pore inter‐distance and angle were reliable and reproducible on glass and adhesive tape substrates. Nevertheless, in 2020, Wang et al. discussed pore location drift over one month. Their experimental results demonstrated that the pore location observed in either direct microscopic photography or ink impressions shifted in both the longitudinal and transverse directions (up to 166.46 μm and 61.00 μm, respectively).
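The two pore-pair parameters highlighted above, interspace and angle, are straightforward to compute once pore centres have been localized. A minimal sketch, with invented coordinates in micrometres:

```python
import math

def interspace_and_angle(p, q):
    """Pore-to-pore distance (um) and angle (degrees relative to the x-axis)."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))

# Consecutive pore centres along one ridge (illustrative values only).
ridge_pores = [(0, 0), (180, 20), (350, 10)]
for p, q in zip(ridge_pores, ridge_pores[1:]):
    d, a = interspace_and_angle(p, q)
    print(f"{p} -> {q}: interspace={d:.1f} um, angle={a:.1f} deg")
```

Because the angle depends only on the relative placement of a pore pair, it is insensitive to uniform stretching of the print, which is one plausible reason it proved more stable than the interspace.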
Furthermore, the relative pore location on the friction ridge was found to be susceptible to deposition pressure and secretory activity in Cao's work.

2.2 Ridge edges, widths and incipient ridges

Discussion of level 3 details has often been limited to pores, but it can be broadened to include the shapes of ridge edges, ridge width and incipient ridges. The shape of ridge edges was classified into seven types (straight, convex, peak, table, pocket, concave and angle) by Chatterjee. The diverse shape types are formed by the differential growth of the ridge units and the pores near the ridge edge. After Oklevski examined 100 pairs of inked impressions, the number of edge features was found to decrease as the capture interval increased. The researchers believed that the susceptibility of ridge edges to deformation and damage could account for this observation. Meanwhile, the quality of edge features also declined, with concave edge features showing the greatest stability. Our findings indicated that the ridge shape was well retained on nitrocellulose (NC) membranes under a constant deposition pressure of about 250 g. Another level 3 parameter involving ridges is the ridge width, commonly 200–500 μm. Like edge shape, it is extremely vulnerable to deposition pressure and widens as the applied pressing force increases. When the applied pressure was below 300 g, the ridge width increased significantly, but only slowly when the pressure exceeded 300 g. Moreover, the variation in ridge width could be rendered negligible by keeping the pressing force constant. Without a doubt, width variation is also possibly attributable to physiological occurrences such as weight gain or loss, usage, gouty deformation, or age. Incipient ridges, located in the furrow regions, are generally thinner and lower than papillary ridges and may not be detected in fingerprint impressions. Moreover, they rarely bifurcate and rarely contain pores. Stücker et al.
reported that older people (>20 years old) showed a higher frequency of incipient ridges than the younger group (<20 years old). In the study published by Silva, the number of incipient ridges increased with age among males. In contrast to males, a reduction in the number of incipient ridges was observed among older females. Wentworth et al. found no variation in incipient ridges when observing a child's inked impressions collected every two years over ten years. Conversely, Monson et al. concluded that the incipient ridges of both live fingertips and ink impressions are variable even over a two‐month interval. In 2013, Fu's group showed that the pressing force played a role in the reproducibility of incipient ridges: the incipient ridges widened, distorted and even disappeared when excessive deposition force was applied.
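The ridge width parameter discussed above can be estimated from a binarized scan line taken perpendicular to the ridges, by measuring run lengths of ridge pixels and converting with the pixel pitch. This is a minimal sketch under assumed conditions (1000 ppi resolution, a synthetic scan line), not a published measurement protocol:

```python
UM_PER_PX = 25.4  # pixel pitch at an assumed 1000 ppi

def ridge_widths_um(scanline):
    """Run lengths of consecutive 1s (ridge pixels), converted to micrometres."""
    widths, run = [], 0
    for px in scanline:
        if px:
            run += 1
        elif run:
            widths.append(run * UM_PER_PX)
            run = 0
    if run:  # a ridge touching the end of the scan line
        widths.append(run * UM_PER_PX)
    return widths

# 1 = ridge, 0 = furrow; two synthetic ridges of 12 and 10 pixels.
line = [0] * 3 + [1] * 12 + [0] * 8 + [1] * 10 + [0] * 4
print(ridge_widths_um(line))  # ~304.8 um and ~254.0 um, inside the 200-500 um range
```

Averaging such run lengths over many scan lines reduces the sensitivity to the binarization threshold, though the pressure dependence described above remains.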
3 Visualization Techniques of Level 3 Features

Admittedly, the low usage rate of third‐level features is mainly ascribed to unreliable enhancement methods for fingerprints. Specifically, early fingerprint treatments aimed at extracting level 1 and 2 details ignore the significance of third‐level features, which leads to information omissions. Commonly, fingerprints left at crime scenes are invisible to the naked eye. Although they may carry a certain amount of microscopic features, these will not be detected unless reliable visualization techniques are employed. As mentioned in section 2, there exist considerable level 3 details that can be utilized for problematic fingerprint recognition and even donor profiling. Hence, researchers have been called upon to develop reliable level 3 feature enhancement techniques, and subsequent advances have occurred in this field. Here, we classify the achievements into four categories: techniques based on physical interaction, residue‐responsive reagents, mass spectrometry (MS) methods and electrochemical techniques. Note that only methods that can accurately and reliably detect level 3 features are covered in this section. To give an intuitive view of the advances, a summary is presented in Table .

3.1 Techniques based on physical interaction

3.1.1 Techniques based on electrostatic adsorption

With the implementation of nanotechnology in recent years, fingerprint enhancement, especially of third‐level details, has taken a step forward owing to the excellent physical and electronic properties of various nanomaterials.[ , , ] In particular, quantum dots (QDs) with good performance have been reported to allow LFP imaging with high contrast. In 2017, Wu et al. utilized a red‐emitting N‐acetylcysteine‐capped CdTe QDs (N−L‐Cys‐capped CdTe QDs) reagent to visualize eccrine LFPs.
The fingerprints deposited on aluminium foil were quickly exhibited, in about 5 s, after immersion in the as‐prepared solution. Level 3 features such as sweat pores were accurately mapped, and their detected numbers were significantly larger than those obtained with the cyanoacrylate agent. However, the reagent was expensive, contained toxic heavy metal ions and required complicated preparation procedures. More importantly, the level 3 details were not entirely detected with this QD‐staining method. Rare‐earth doped luminescent nanomaterials are considered an alternative for visualizing LFPs on both porous and non‐porous surfaces owing to their excellent fluorescence, high chemical stability and high affinity for fingerprint residues. Nagabhushana's group realized the rapid detection of fingerprints using Sm 3+ doped calcium zirconate nanophosphors (CaZrO 3 : Sm 3+ ) prepared via an environmentally friendly solution combustion route. Notably, the sweat pore shapes of fingerprints on glass, namely circular, triangular, open, etc., could be clearly identified. Unfortunately, this nanophosphor seemed to make no contribution to eliminating background interference and could cover the level 3 details when used excessively. Later, this group adopted Pr 3+ activated LaOF nanophosphors (LaOF: Pr 3+ ) for imaging level 3 structures. They fabricated alkali metal ion blended LaOF: Pr 3+ via an eco‐friendly ultrasound‐assisted sonochemical method, and the as‐obtained product emitted bright red light under 254 nm UV light. Both open and closed sweat pores were then detected in the revealed fingerprints by SEM examination. Inevitably, it should be noted that background hindrance could be eliminated except on substrates with background fluorescence. Moreover, the powder reagent may adhere to pore regions owing to its nonselective physical adsorption mechanism.
Furthermore, powder particles, especially those at the nanoscale, easily aggregate to a larger size, which results in distortion of the level 3 features. Additionally, the fingerprint brush can damage the fingerprint ridges during the visualization process. As a result, the powder may cover or damage some microscopic details and even cause pseudo characteristics.

3.1.2 Techniques based on hydrophilic‐hydrophobic interaction

Aggregation‐induced emission (AIE) materials have drawn extensive interest for wide applications owing to their colourful fluorescence with high contrast, low toxicity and easy functionalization. Since 2012, they have been employed to reveal LFPs, with several limitations discovered in practice: (i) the organic solvents used damage the residues, while powders harm forensic technicians; (ii) post‐treatments are required after fingerprint visualization, such as removing excess dye with water or air; (iii) they are customarily suitable only for non‐porous substrates; and (iv) most dyes are excited under 365 nm light, which damages the technicians and fingerprint residues such as DNA. To address these problems, an AIE‐based water‐soluble probe, TPA‐1OH, was designed without any cosolvent or stabilizer; it emits strong red fluorescence under visible light excitation (405 nm). Its amphiphilicity allows it to adhere to fingerprint residues through hydrophobic‐hydrophobic interaction between the lipophilic end of TPA‐1OH and the lipid secretions. Moreover, the electrostatic interaction between the positively charged TPA‐1OH and the negatively charged residues also aids fingerprint enhancement. As depicted in Figure , sweat pores with diameters of 80–120 μm were found to be distributed periodically along the ridges with 100–200 μm interspaces. Noteworthily, the detected pores and ridge shapes were consistent with those of live fingertips.
Besides AIE materials, there exist other methods whose reagents interact with fingerprints through hydrophilicity or hydrophobicity. In 2017, our group developed a fast and reliable visualization method using a hydrophilic cellulose membrane and an aqueous dye solution. LFPs deposited on various substrates could be detected after pre‐treatment by membrane transfer. In this approach, when the fingerprint/membrane samples were placed onto the solution, the relatively hydrophobic fingerprint residues acted as a "mask" that directed the aqueous dye solution to occupy the furrows and bare membrane rather than the ridges. Recently, we developed sebaceous LFPs deposited on NC membranes with only water and then captured high‐resolution optical micrographs. From the picture in Figure (a), level 3 features, including all dimensional attributes of the ridges and pores, can be accurately and reproducibly extracted. Additionally, the third‐level details of water‐developed fingerprints, especially pores, ridge contours and widths, were matched one‐to‐one to those of live fingertips (Figure (b)–(c)). Unfortunately, using NC membranes to lift fingerprints from problematic substrates such as skin exhibited fewer level 3 details than direct deposition on NC membranes.

3.1.3 Techniques based on dissolution effect

Poly(vinyl alcohol) (PVA) materials, whose properties favour fingerprint preservation, have attracted forensic researchers' attention. An article published in 2020 described a super‐soft, water‐sensitive PVA electrospun nanopaper for in situ mapping of the entire set of fingerprint characteristics at all three levels (Figure ). The nanopaper possessed two properties that guaranteed the successful detection of fingerprint details: (i) ultra‐softness: once a fingertip was deposited on the paper, the area contacted by the friction ridges was compacted, while the furrow regions that did not touch the paper remained fluffy; (ii) water sensitivity.
A tiny amount of sweat secreted through the pores could quickly and selectively dissolve the nanopaper and thereby achieve sweat pore mapping. As shown in Figure (g)–(h), systematic statistics were also compiled in this work. The results demonstrated that the pore‐to‐pore distance ranged from 140 to 300 μm and that the pore sizes were about 45–52 μm. As this method exhibited excellent performance, it is worth investigating whether the PVA nanopaper can transfer fingerprints from various substrates.

3.2 Techniques based on residue‐responsive reagents

Endogenous fingerprint residues are mainly secreted by the exocrine sweat glands and sebaceous glands and include water, inorganic salts, amino acids, polypeptides, proteins, fatty acids, urea, squalene, etc. It is worth noting that all of these components can form the basis of LFP visualization. In particular, water, which makes up a very high proportion (about 98–99 %) of eccrine sweat, has led to the emergence of numerous detection methods based on water‐responsive reagents.

3.2.1 Techniques based on water‐responsive reagents

Commercial thermoplastic polyurethane (TPU) resin has a release‐induced response (RIR). In 2011, Chen et al. prepared TPU and fluorescein (TPU/fluorescein) electrospun mats for facile collection and identification of LFPs on various surfaces. When the water in fingerprint residues contacted the TPU/fluorescein electrospun mat, crosslinking between the TPU and the fingerprint residues led to phase separation between the TPU network and the fluorescein. As a result, the fingerprint ridges displayed an obvious colour change to red. Figure presents the transfer procedure and effectiveness of the TPU/fluorescein electrospun mat for LFPs. The results showed that LFPs could be transferred from various surfaces and quickly developed by heating with hot air (100 °C) in 30 s, making the method well suited to on‐site detection of LFPs.
However, the pores and ridge edges of the lifted fingermarks seemed poorly enhanced when they were deposited on polypropylene film, marble and wood. Owing to their hydrochromic property, polydiacetylenes (PDAs) have been actively investigated for applications in humidity monitoring, water content detection in organic solvents, water‐jet‐based rewritable printing, sweat pore mapping, etc. In 2014, Kim and co‐workers reported that a hydrochromic conjugated polymer (PDA‐coated PET film) could map human sweat pores (Figure (a‐1)). Intriguingly, the tiny amount of water produced by the sweat pores caused a blue‐to‐red colour change along with fluorescence emission when the fingertip contacted the as‐prepared film. After superimposing the mapped pores on a fingerprint scanning image, they concluded that the technique could differentiate the activity of sweat pores (Figure (a‐2)). However, this technique required screening of hygroscopic elements and diacetylene monomers, which were expensive. In addition, the PDA films were too sensitive to enable sweat pore mapping in environments with relative humidity over 80 %. Subsequently, this team designed a new strategy to address these issues. Water‐responsive fluorescein and a hydrophilic matrix, polyvinylpyrrolidone (PVP), were used for sweat pore detection (Figure (b‐1)). Fortunately, the cost‐effective fluorescein–PVP film was stable over a wide humidity range of about 20–90 % while remaining sensitive to sweat. To further improve the properties of hydrochromic films, an imidazolium‐containing DA monomer (DA‐1) was employed by Kim et al. The chemical structure of DA‐1 and the stepwise procedure for mapping sweat pores are presented in Figure 7(c). Specifically, the amphiphilic DA‐1 could be readily inkjet‐printed on conventional paper, which subsequently polymerized to PDA after UV irradiation (30 s) and turned blue.
Once a fingertip was pressed on the blue-coloured PDA-coated paper, an immediate blue-to-red colour change together with red fluorescence emission occurred, thus achieving sweat pore mapping of the skin. The colour of the as-produced DA-1-derived PDA paper remained unchanged even under moist conditions with humidity above 90 %. Undoubtedly, the pores distributed on palms, toes and soles were accurately recorded with such PDA-coated paper. In 2017, this group developed a polydiacetylene-polyethylene oxide (PDA-PEO) composite film, which underwent a blue-to-red colour change once it encountered water (a nanolitre of sweat) and successfully achieved human sweat pore imaging. Surprisingly, the flexibility of the PDA-PEO film made it possible to visualize sweat pores on highly curved skin surfaces such as the nose. Meanwhile, hydrochromic carbon nanodots (CDs) create another avenue for level 3 detail detection because of their unique optical properties. Shen et al. reported a supra-CD pore mapping system obtained by coating filter paper with supra-CDs self-assembled from dodecyl-functionalized CDs (CD-Ps). Their water-responsive behaviour was ascribed to the decomposition of the supra-CDs on contact with water. Notably, the strong emission of the supra-CD-coated paper was not extinguished even after the water evaporated. Nevertheless, whether the as-obtained material can be used to lift fingerprints from various substrates is still unknown. Moreover, mapping sweat pores alone may miss characteristic information, as sweat pore activity can vary from time to time. Lanthanide metal-organic frameworks (Ln-MOFs), ideal candidates for sweat pore mapping, were recently designed by Zhou et al. They emitted magenta light after reacting with water, with a response time of 180 s. Although they offer not only pore information but also pattern type and minutiae points, their potential for the practical transference of fingerprints should be addressed in further investigations.
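Computationally, the turn-on readouts in this subsection reduce to the same two steps: find the pixels whose colour (or fluorescence) has switched, then measure pore positions and spacings. Below is a minimal pure-Python sketch of both steps; the pixel values, threshold ratio and centroid coordinates are invented for illustration and are not taken from any of the cited works.

```python
import math

def switched_pixels(image, ratio=1.5):
    """Return (row, col) positions whose colour has switched from
    blue-dominant to red-dominant, i.e. candidate sweat-pore sites
    on a hydrochromic (e.g. PDA-type) film.
    image: 2-D list of (r, g, b) tuples in the 0-255 range;
    ratio: how strongly red must dominate blue to count as switched."""
    return [
        (i, j)
        for i, row in enumerate(image)
        for j, (r, g, b) in enumerate(row)
        if r > ratio * max(b, 1)
    ]

def pore_spacings(centroids):
    """Nearest-neighbour pore-to-pore distances (same units as input)."""
    out = []
    for i, (xi, yi) in enumerate(centroids):
        out.append(min(math.hypot(xi - xj, yi - yj)
                       for j, (xj, yj) in enumerate(centroids) if j != i))
    return out

# Invented frame: blue film with two red (activated) pore spots.
frame = [
    [(20, 30, 200), (210, 40, 30), (20, 30, 200)],
    [(20, 30, 200), (20, 30, 200), (205, 35, 25)],
]
pores = switched_pixels(frame)
print(pores)  # [(0, 1), (1, 2)]
print(pore_spacings(pores))
```

In practice the switched pixels would first be grouped into connected regions and each region's centroid used for the spacing statistics; the sketch skips that grouping step for brevity.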
3.2.2 Techniques based on phosphate-responsive reagents

Phosphate (Pi) is abundant in eccrine sweat (1.4 mg/L). Huang et al. designed a Pi-responsive PVA electrospun nanofibrous (NFs) membrane in which assembled dual-emission microrods of carbon quantum dots (CQDs) with Eu(III) ions ((CQDs)-Eu(III)) were embedded (PVA/microrods). The preparation procedure and application to fingerprint visualization are demonstrated in Figure . The membrane showed strong red emission under UV irradiation due to aggregation-induced Dexter energy transfer from the CQDs to the Eu(III) ions. When a fingertip touched the as-prepared membrane, Pi in the sweat secretions could bind the Eu(III) ions and block the Dexter energy transfer from the CQDs to the Eu(III) ions, leading to recovery of the blue fluorescence of the CQDs. As a result, the ridge-occupied area emitted blue fluorescence under UV irradiation and even revealed the sweat pores distributed along the papillary ridges. The PVA/microrods membrane could be made into paper and used to identify the person who had touched a PVA/microrods document through fingerprint analysis. Moreover, it would be of additional value if the PVA/microrods membranes were applied to lift LFPs from various substrates.

3.2.3 Techniques based on immunolabeling reagents

Besides water, proteins and polypeptides, whose content is 150–250 mg/L, are regarded as the most abundant components of eccrine secretion. To date, several proteins, including albumin, keratins 1/10, cathepsin D, dermcidin, lysozyme and EGF, have been identified in fingerprints through various techniques. Level 3 detection through immunolabeling dates back to 2009. Drapel and her co-workers used anti-keratin 1/10, anti-cathepsin-D and anti-dermcidin antibodies to visualize fingerprints deposited on polyvinylidene fluoride (PVDF) membranes, non-whitened papers and whitened papers. The experimental results showed that the fingerprints revealed on PVDF were of the best quality.
Furthermore, antigens originating from the epidermis gave well-defined ridge edges (keratins 1 and 10; cathepsin-D), whereas antigens secreted by sweat glands offered pore information (dermcidin). The pore mapping presented in Figure was revealed by anti-dermcidin reagents. To enhance the immunodetection signal, visible dyes, organic fluorophores and nanoparticles were later investigated as tags for the secondary antibody. As a result, the immunolabeling application scenario was expanded to various substrates and subsequently proved compatible with DNA analysis. Since the amount of dermcidin secreted is variable and sometimes tiny, multi-target immunolabeling approaches that can simultaneously react with several peptides have recently exhibited great potential for high-quality pore visualization. Compared with antigen-antibody interactions, aptamer recognition methods open a facile pathway for the detection of level 3 details on account of their exceptionally high specificity and affinity for fingerprint residues. Liu et al. reported a lysozyme-binding aptamer (LBA)-modified sandwich-structured Au/pNTP/SiO 2 surface-enhanced Raman scattering (SERS) probe. After SERS imaging, the second- and third-level details could be clearly distinguished, especially for eccrine prints on glass substrates (Figure ). Noteworthily, more of the Au/pNTP/SiO 2 -LBA probe was deposited on eccrine prints (Figure (c)) than on sebaceous prints (Figure (d)), indicating the higher content of lysozyme in eccrine secretions.

3.3 Imaging mass spectrometry (IMS) techniques

IMS has drawn considerable attention for recognizing and imaging the chemical components of fingerprints. Among the numerous mass spectrometry techniques, time-of-flight secondary ion mass spectrometry (ToF-SIMS) and matrix-assisted laser desorption ionization mass spectrometry (MALDI-MS) hold a clearly advantageous position in sensing level 3 features of fingerprints.
Thus, we primarily summarize these two techniques in detail.

3.3.1 MALDI-MS imaging techniques

Abel and Elsner utilized a MALDI-MS technique to selectively image regions of interest in LFPs, assisted by optical positioning. Promisingly, the whole acquisition took only a few minutes, which was 2–3 orders of magnitude faster than conventional full MS scanning. This combined optical and MS imaging could offer level 1–3 features, not just pore information. Admittedly, it is challenging to assign selected signals to the physiological substances of fingerprint residues. One year later, Voelcker et al. employed the MALDI-ToF/ToF MS technique to achieve nanostructural imaging on porous silicon wafers coated with Ag layers (0.4–3.2 nm) (Ag-coated pSi). The mass accuracy of this method was improved by more than an order of magnitude, and it could thereby visualize fingerprints along with their level 3 details.

3.3.2 ToF-SIMS imaging techniques

ToF-SIMS has superior spatial resolution and does less damage to fingerprint samples than the MALDI-MS imaging technique. It was initially introduced by Bailey et al. together with a discussion of the feasibility of fingerprint detection using such a mass spectrometry method. The three scenarios illustrated by this group delivered a clear signal that fingerprints could be enhanced using ToF-SIMS, even those poorly developed with conventional methods. In 2017, graphene oxide (GO)-enhanced ToF-SIMS was reported to detect relatively high-mass molecules in contaminated fingerprints, such as poisons, alkaloids (>600 Da), controlled drugs and antibiotics (>700 Da), as well as endogenous substances (Na + , K + ). Delicate fingerprint characteristics reaching the third level were obtained, as presented in Figure . The pore sizes, shapes and distribution could be clearly observed: the pore in Figure (d) was triangular, whilst that in Figure (e) appeared round.
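Both MALDI-MS and ToF-SIMS ion images are, at heart, per-pixel intensity maps accumulated for a chosen m/z window, so that each image tracks the spatial distribution of one species (e.g. Na+ versus a heavier contaminant). A minimal sketch of this binning step follows; the event list and m/z values are invented for illustration, not drawn from the cited studies.

```python
def ion_image(events, mz_lo, mz_hi, nx, ny):
    """Accumulate mass-spectrometry events into an ion image.
    events: iterable of (x, y, mz, intensity) with pixel coordinates
    0 <= x < nx and 0 <= y < ny.  Only events whose m/z falls within
    [mz_lo, mz_hi] contribute, so the resulting image maps a single
    chosen species across the fingerprint area."""
    img = [[0.0] * nx for _ in range(ny)]
    for x, y, mz, intensity in events:
        if mz_lo <= mz <= mz_hi:
            img[y][x] += intensity
    return img

# Invented events: Na+ (m/z ~23) on a ridge pixel, plus one
# heavier contaminant fragment elsewhere that is filtered out.
events = [(0, 0, 22.99, 5.0), (1, 0, 369.35, 2.0), (0, 0, 22.99, 3.0)]
print(ion_image(events, 22.5, 23.5, nx=2, ny=1))  # [[8.0, 0.0]]
```

Selecting a narrower m/z window trades signal for specificity, which is why assigning a given channel to a physiological substance remains the hard part, as noted above.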
Another group then attempted to broaden its application to fingerprints left on stainless steel. The results showed that it was capable of identifying pore-level details even for prints that had been deposited for 26 days. Very recently, Li's group has attempted to image fingerprints on banknotes according to the molecular-ion signals and fragment-ion peaks of both endogenous chemicals and contaminants. Certainly, the pore structure could also be captured, on rough substrates even more than on smooth surfaces. Although the methods mentioned show exceptional performance in sweat pore detection, the long scanning time might hinder their implementation in forensic investigation practice.

3.4 Electrochemical techniques

Over the past two decades, unavoidable background interference has driven the development of electrochemical methods for fingerprint imaging. Intriguingly, two methods, electrochemiluminescence (ECL) and scanning electrochemical microscopy (SECM), have already been reported to accurately and reliably visualize level 3 features owing to their sensitivity, good controllability and low toxicity. The progress achieved in imaging third-level characteristics through these methods is presented below.

3.4.1 Fingerprint level 3 detail imaging by ECL

ECL is commonly generated by certain electrochemical reactions triggered by a potential. Su's group pioneered the application of ECL to fingerprint visualization in 2012. In principle, the sebaceous residues of fingerprints on conductive substrates act as a mask or template. By spatially controlling the ECL reactions to occur in either the bare surface or the ridge-occupied area, negative-mode and positive-mode fingerprint images can be obtained (Figure 12(a)). As demonstrated in Figure (b), seven-month-old sebaceous fingerprints could be clearly visualized, with some sweat pores located along the ridges. They also performed ECL using a highly electrochemiluminescent molecule, namely rubrene.
In positive mode, the papillary ridges emitted ECL against a dark background, eventually generating a fingerprint impression whose level 3 features could be identified. Although ECL is rapid and sensitive for imaging LFPs, it is restricted to conductive substrates.

3.4.2 Fingerprint level 3 detail imaging by SECM

SECM has been successfully applied to electrochemically image substrate topography and local reactivity with high resolution. Girault's group proved that silver-stained proteins on PVDF membranes can be visualized by recording the tip current signal generated by the oxidation of the mediator K 3 IrCl 6 . Afterward, this group performed the first SECM imaging of silver-stained fingerprints. Satisfactory results were obtained despite excessive silver staining covering some features. Our group has been working on label-free fingerprint imaging using SECM without pre-treatment procedures such as silver staining. Theoretically, the mediator methyl viologen selectively reacts with the electroactive species in the residues rather than in the furrow regions, resulting in a sharp current contrast between ridges and furrows. Figure (d) and (e) illustrate the feasibility of this label-free method for visualizing level 3 details. More interestingly, fingerprints deposited on other surfaces, such as glass, could be imaged after a membrane-lifting procedure. Additionally, the imaging time could be reduced by combining optical microscopy methods once the fingerprints had been transferred to the NC membrane.
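The ridge/furrow contrast exploited in the SECM approach can be illustrated with a short post-processing sketch: normalise each tip current to its bulk value and label pixels whose current deviates strongly as ridge-occupied. The scan values and the deviation threshold below are illustrative assumptions, not data from the cited work.

```python
def ridge_map(currents, i_bulk, deviation=0.2):
    """Label SECM scan pixels as ridge (True) or furrow (False).
    Over ridge residues the mediator reacts with electroactive
    species, so the normalised tip current i/i_bulk deviates from 1;
    over bare furrow regions it stays close to 1.
    currents: 2-D list of tip currents (e.g. in nA)."""
    return [
        [abs(i / i_bulk - 1.0) > deviation for i in row]
        for row in currents
    ]

# Invented 2 x 3 scan with bulk current 10 nA; the suppressed
# currents mark ridge-occupied pixels.
scan = [[9.8, 6.5, 9.9],
        [6.2, 9.7, 6.0]]
print(ridge_map(scan, i_bulk=10.0))
# [[False, True, False], [True, False, True]]
```

A real scan would of course need a threshold chosen from the measured current histogram rather than a fixed 20 % deviation.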
Techniques based on residue‐responsive reagents Endogenous fingerprint residues are mainly secreted by exocrine sweat glands and sebaceous glands including water, inorganic salts, amino acids, polypeptide, proteins, fatty acids, urea, squalene, etc. It is worth noting that all the components can be the foundation of LFP visualization. Particularly, water taking up a very high proportion about 98–99 % of eccrine sweat, has led to the emergence of numerous detection methods based on water‐responsive reagents. 3.2.1 Techniques based on water‐responsive reagents Commercial thermoplastic polyurethane (TPU) resin has a release‐induced response (RIR). In the year 2011, Chen et al. prepared TPU and fluorescein (TPU/fluorescein) electrospun mats for facile collection and identification of LFPs on various surfaces. When the water in fingerprint residues contacted the TPU/fluorescein electrospun mat, a crosslinking behaviour between TPU and the residues of fingerprints led to the phase separation between the TPU network and fluorescein. As a result, the fingerprint ridges display an obvious change in color to red. Figure presents the transfer procedure and effectiveness of the TPU/fluorescein electrospun mat for LFPs. The results showed that LFPs could be transferred from various surfaces and quickly developed by heating with hot air (100 °C) in 30 s, which was suited to on‐site detection of LFPs. However, the pores and ridge edge of lifted fingermarks seemed poorly enhanced when they were deposited on polypropylene film, marble and wood. Owing to their hydrochromic property, polydiacetylenes (PDAs) have been actively investigated for applications in humidity monitoring, water content detection of organic solvents, water‐jet‐based rewritable printing and sweat pore mapping, etc. In the year 2014, Kim and his co‐workers reported hydrochromic conjugated polymer (PDA‐coated PET film) could map the human sweat pores (Figure (a‐1)). 
Intriguingly, a tiny amount of water produced from sweat pores led to a blue‐to‐red colour change along with the fluorescence emission when the fingertip contacted the as‐prepared film. After superimposing the mapped pores on a fingerprint scanning image, they concluded that the technique could differentiate the activity of sweat pores (Figure (a‐2)). However, this technique is required to screen hygroscopic elements and diacetylene monomers whose prices were expensive. In addition, the PDA films were too sensitive to enable sweat pore mapping under such environments with relative humidity over 80 %. Subsequently, this team designed a new strategy for improving the issues mentioned above. The water‐responsive fluorescein and a hydrophilic matrix polyvinylpyrrolidone (PVP) were used for sweat pore detection (Figure (b‐1)). Fortunately, the cost‐effective fluorescein–PVP film was stable in a wide range of humidity around 20–90 %, whereas sensitive to sweat. To further improve the property of hydrochromic films, the imidazolium containing DA monomer (DA‐1) was employed by Kim et al. The chemical structure of DA‐1 and the stepwise procedure for mapping sweat pores are presented in Figure7(c). Specifically, the amphiphilic DA‐1 could be readily inkjet‐printed on conventional paper which subsequently polymerized to PDA after UV‐irradiation (30 s) and became blue as well. Once a fingertip pressed on the blue‐coloured PDA‐coated paper, an immediate colour change from blue to red as well as red fluorescence emission would happen and thus achieve sweat pore mapping on the skin. The colour of as‐produced DA‐1‐derived PDA paper maintained unchanged even in a moisture condition whose humidity was above 90 %. Undoubtedly, the pores distributed in palms, toes and soles were accurately recorded through such a PDA‐coated paper. 
In 2017, this group developed a polydiacetylene‐polyethylene oxide (PDA‐PEO) composite film, which underwent a blue‐to‐red colour change once encountered water (a nanolitre of sweat) and successfully achieved human sweat pores imaging. Surprisingly, the flexibility of the PDA‐PEO film made it possible to visualize sweat pores of highly curved skin surfaces such as the nose. Meanwhile, the hydrochromic carbon nanodots (CDs) create another avenue for level 3 details detection because of their unique optical properties. Shen et al. reported a supra‐CD pore mapping system by coating supra‐CDs self‐assembled by dodecyl‐functionalized CDs (CD−Ps) on filter paper. Its water‐responsive behaviour was ascribed to the decomposition of the supra‐CDs when contacting water. Notably, the strong emission of supra CD‐coated paper wouldn't be extinguished even after the water evaporated. Nevertheless, whether the as‐obtained material could be used to lift fingerprints on various substrates is still unknown. Moreover, only mapping sweat pores will cause characteristic information missing as sweat pores may vary time‐to‐time. Lanthanide metal‐organic frameworks (Ln‐MOFs), ideal candidates for sweat pore mapping, are recently designed by Zhou et al. They converted into magenta light after reacting with water in a response time of 180 s. Although they offer not only pore information but also pattern type and minutiae points, the potential for the practical transference of fingerprints should be included in further investigation. 3.2.2 Techniques based on phosphate‐responsive reagents Phosphate (Pi) is rich in eccrine sweat (1.4 mg/L). Huang et al. designed a Pi‐responsive PVA electrospun nanofibrous (NFs) membrane where the assembled dual‐emission microrods of carbon quantum dots (CQDs) with Eu (III) ion ((CQDs)‐Eu (III)) are embedded (PVA/microrods). The preparation procedure and application in fingerprint visualization were demonstrated in Figure . 
The membrane had strong red emission under UV irradiation due to the aggregation‐induced Dexter energy transfer from CQDs to Eu (III) ions. When a fingertip touched the as‐prepared membrane, Pi in sweat secretions could bind with the Eu (III) ions and block the Dexter energy transfer from CQDs to Eu (III) ions, leading to the recovery of the blue fluorescence of CQDs. As a result, the ridge‐occupied area emitted a blue fluorescence under UV irradiation and even presented the sweat pore distributed along the papillary ridges. The PVA/microrods membrane could be made into paper and enabled to identify the person who touched the PVA/microrods document through fingerprint analysis. Moreover, it would be of additional value if the PVA/microrods membranes were applied to lift LFPs on various substrates. 3.2.3 Techniques based on immunolabeling reagents Besides water, protein and polypeptide, whose content is 150–250 mg/L, have been regarded as the most abundant components in eccrine secretion. To date, a few proteins including albumin, keratins 1/10, cathepsin D, dermcidin, lysozyme and EGF have been identified in fingerprints through various techniques. The level 3 detection through immunolabel method dated back to the year 2009. Drapel and her co‐workers used anti‐keratin 1/10, anti‐cathepsin‐D and anti‐dermcidin to visualize fingerprints deposited on polyvinylidene fluoride (PVDF) membranes, non‐whitened papers and whitened papers. The experiment results showed the revealed fingerprints on PVDF obtained the best quality. Furthermore, antigens originating from the epidermis gave well‐defined ridge edges (keratins 1 and 10; cathepsin‐D) whereas antigens secreted by sweat glands offered pore information (dermcidin). The pore mapping presented in Figure was revealed by anti‐dermcidin reagents. 
To enhance the immunodetection signal, visible dyes, organic fluorophores and nanoparticles were later tagged to the secondary antibody. As a result, the range of substrates amenable to immunolabeling was expanded, and the approach subsequently proved compatible with DNA analysis. Since the amount of dermcidin secreted is variable and sometimes tiny, multi‐target immunolabeling approaches that react simultaneously with several peptides have recently shown great potential for high‐quality pore visualization. Compared with antigen‐antibody interactions, aptamer recognition methods open a facile pathway for the detection of level 3 details on account of their exceptionally high specificity and affinity for fingerprint residues. Liu et al. reported a lysozyme‐binding aptamer (LBA)‐modified sandwich‐structured Au/pNTP/SiO 2 surface‐enhanced Raman scattering (SERS) probe. After SERS imaging, the second‐ and third‐level details could be clearly distinguished, especially for eccrine prints on glass substrates (Figure ). Notably, more of the Au/pNTP/SiO 2 ‐LBA probe was deposited on eccrine prints (Figure (c)) than on sebaceous prints (Figure (d)), indicating the higher content of lysozyme in eccrine secretions.
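Whether the read‐out is a hydrochromic colour change, recovered CQD fluorescence or a SERS map, the raw result of these residue‐responsive methods is an image in which activated pores appear as small high‐contrast spots; digitizing it into a pore map is essentially a thresholding‐and‐labelling exercise. The following is a minimal illustrative sketch (not taken from any of the cited works; the ratio image, threshold value and 4‐connectivity are our assumptions):

```python
import numpy as np

def extract_pores(ratio_img, threshold=0.5):
    """Label connected bright regions (activated pores) in a
    response-ratio image and return their centroid coordinates.

    ratio_img: 2D float array; high values = strong response.
    """
    mask = ratio_img > threshold
    labels = np.zeros(mask.shape, dtype=int)
    centroids = []
    next_label = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue                      # already assigned to a pore
        next_label += 1
        labels[seed] = next_label
        stack, pixels = [seed], []
        while stack:                      # 4-connected flood fill
            r, c = stack.pop()
            pixels.append((r, c))
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = next_label
                    stack.append((nr, nc))
        centroids.append(tuple(np.mean(pixels, axis=0)))
    return centroids
```

In practice a real pipeline would also filter blobs by size to reject noise, but the centroid list above is already the input that pore‐based comparison algorithms expect.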
Imaging Mass spectrometry (IMS) techniques IMS has drawn considerable attention for recognizing and imaging the chemical components of fingerprints. Among the numerous mass spectrometry techniques, time‐of‐flight secondary ion mass spectrometry (ToF‐SIMS) and matrix‐assisted laser desorption ionization mass spectrometry (MALDI‐MS) hold a clear advantage in sensing level 3 features of fingerprints; we therefore summarize these two techniques in detail. 3.3.1 MALDI‐MS imaging techniques Abel and Elsner utilized a MALDI‐MS technique, assisted by optical positioning, to selectively image regions of interest in LFPs. Promisingly, the whole acquisition took only a few minutes, which was 2–3 orders of magnitude faster than conventional full MS scanning. This combined optical and MS imaging could offer level 1–3 features, not just pore information. Admittedly, it is challenging to assign the selected signals to physiological substances in fingerprint residues. One year later, Voelcker et al. employed the MALDI‐ToF/ToF MS technique to achieve nanostructural imaging on porous silicon wafers coated with Ag layers (0.4–3.2 nm) (Ag‐coated pSi). The mass accuracy of this method was improved by more than an order of magnitude, and fingerprints could thereby be visualized along with their level 3 details. 3.3.2 ToF‐SIMS imaging techniques ToF‐SIMS has superior spatial resolution and causes less damage to fingerprint samples than the MALDI‐MS imaging technique. It was initially introduced by Bailey et al., together with a discussion of the feasibility of fingerprint detection using this mass spectrometry method. The three scenarios illustrated by this group provided good evidence that fingerprints could be enhanced using ToF‐SIMS, even those poorly developed with conventional methods. 
In 2017, graphene oxide (GO)‐enhanced ToF‐SIMS was reported to detect relatively high‐mass molecules in contaminated fingerprints, including poisons, alkaloids (>600 Da), controlled drugs and antibiotics (>700 Da), as well as endogenous substances (Na + , K + ). Delicate fingerprint characteristics down to the third level were obtained, as presented in Figure . The pore sizes, shapes and distribution could be clearly observed: the pore in Figure (d) was triangular, whereas that in Figure (e) appeared round. Another group then attempted to broaden the application to fingerprints left on stainless steel; the results showed that it was capable of identifying pore‐level details even for prints deposited 26 days earlier. Very recently, Li's group has attempted to image fingerprints on banknotes using the molecular ion signals and fragment ion peaks of both endogenous chemicals and contaminants. The pore structure could also be captured, more readily on rough substrates than on smooth surfaces. Although the methods mentioned show exceptional performance in sweat pore detection, the long scanning time might hinder their implementation in forensic investigation practice.
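Conceptually, building such a single‐ion image from ToF‐SIMS (or MALDI) data reduces to summing, at every pixel, the spectral intensity inside a chosen m/z window, for instance around Na + at m/z 23. A hedged sketch on a synthetic data cube follows (the array layout and window width are our assumptions, not any instrument vendor's API):

```python
import numpy as np

def ion_image(spectra, mz_axis, mz_center, mz_width=0.5):
    """Build a single-ion image from a hyperspectral MS data cube.

    spectra:   array of shape (rows, cols, n_channels), one mass
               spectrum per pixel.
    mz_axis:   1D array of m/z values for the channel axis.
    mz_center: centre of the m/z window of interest.
    Returns a (rows, cols) map of intensity summed within
    mz_center +/- mz_width.
    """
    window = (mz_axis >= mz_center - mz_width) & (mz_axis <= mz_center + mz_width)
    # Sum only the channels inside the window for every pixel.
    return spectra[:, :, window].sum(axis=2)
```

Stacking several such maps (e.g. one per endogenous ion, one per contaminant) is how chemically selective ridge and pore images are composed from a single acquisition.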
Electrochemical techniques Over the past two decades, unavoidable background interference has driven the development of electrochemical methods for fingerprint imaging. Intriguingly, two such methods, electrochemiluminescence (ECL) and scanning electrochemical microscopy (SECM), have already been reported to visualize level 3 features accurately and reliably owing to their sensitivity, good controllability and low toxicity. The progress achieved in imaging third‐level characteristics with these methods is described below. 3.4.1 Fingerprint level 3 detail imaging by ECL ECL is commonly generated by electrochemical reactions triggered by an applied potential. Su's group pioneered the application of ECL to fingerprint visualization in 2012. In principle, the sebaceous residues of fingerprints on conductive substrates act as a mask or template. By spatially controlling the ECL reactions to occur at either the bare surface or the ridge‐occupied area, negative‐mode and positive‐mode fingerprint images can be obtained (Figure 12(a)). As demonstrated in Figure (b), seven‐month‐old sebaceous fingerprints could be clearly visualized, with some sweat pores located along the ridges. They also performed ECL using a highly electrochemiluminescent molecule, rubrene. In positive mode, the papillary ridges emitted ECL against a dark background, eventually generating a fingerprint impression in which level 3 features could be identified. Although ECL is rapid and sensitive for imaging LFPs, it is restricted to conductive substrates. 3.4.2 Fingerprint level 3 detail imaging by SECM SECM has been successfully applied to image substrate topography and local reactivity electrochemically with high resolution. Girault's group proved that silver‐stained proteins on PVDF membranes can be visualized by recording the tip current signal generated by oxidation of the mediator K 3 IrCl 6 . 
Afterward, this group was the first to perform SECM imaging on silver‐stained fingerprints. Satisfactory results were obtained despite excessive silver staining covering some features. Our group has been working on label‐free fingerprint imaging using SECM without pre‐treatment procedures such as silver staining. In principle, the mediator methyl viologen reacts selectively with the electroactive species in the residues rather than the furrow regions, resulting in a sharp current contrast between ridges and furrows. Figure (d) and (e) illustrate the feasibility of this label‐free method for visualizing level 3 details. More interestingly, fingerprints deposited on other surfaces such as glass could be imaged after a membrane‐lifting procedure. Additionally, the imaging time could be reduced by combining optical microscopy methods once the fingerprints were transferred to the NC membrane.
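In the label‐free scheme, it is the contrast between ridge and furrow tip currents that gets mapped to an image, so segmenting the raw current map is a simple two‐population thresholding problem. A minimal sketch follows (the isodata‐style iteration is our illustrative choice, not the published processing pipeline):

```python
import numpy as np

def ridge_mask(current_map, tol=1e-6):
    """Segment an SECM tip-current map into ridge/furrow pixels.

    Ridge-occupied regions (electroactive residue) give a distinctly
    different tip current from bare furrow regions, so an iteratively
    refined global threshold separates the two populations.
    Returns a boolean array: True = ridge pixel.
    """
    data = np.asarray(current_map, float)
    t = data.mean()
    while True:  # isodata-style iteration: threshold -> class means -> midpoint
        hi, lo = data[data > t], data[data <= t]
        if hi.size == 0 or lo.size == 0:
            break                          # degenerate (single-population) map
        new_t = 0.5 * (hi.mean() + lo.mean())
        if abs(new_t - t) < tol:
            break
        t = new_t
    return data > t
```

The resulting binary image can then be fed to the same pore/ridge analysis used for optical images.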
Applications of Level 3 Features With the rapid development of level 3 detail imaging techniques, tremendous interest has arisen in exploring potential applications of the revealed level 3 details. So far, many articles have demonstrated that they are not only useful for individualization, particularly with fragmentary fingerprints, but also provide valuable information for donor profiling, fingerprint age determination and spoof fingerprint differentiation, as well as disease diagnosis. In this section, we give a brief introduction to the applications that have already been investigated and then provide a detailed summary in Table . It should be pointed out that the details of the matching algorithms used in personal identification are not covered here, as many reviews have already addressed them. 4.1 Individualization There is growing interest in utilizing level 3 details for fingerprint recognition, especially for fragmentary impressions. Jain et al. indicated that the error matching rate declined by 20 % when level 3 features were combined with level 1–2 features. Among the various level 3 features, pores have received the most attention. Back in 1912, Locard claimed that 20–40 pores are enough to support a personal identification opinion. Since then, many pore‐based matching algorithms have emerged with the implementation of high‐resolution fingerprint imaging. Since pore shapes and sizes vary from one impression to another, the pore position is most commonly used in fingerprint matching and improves the comparison accuracy to some extent. Current pore‐based fingerprint comparison systems mainly rely on two algorithms: the alignment‐based pore comparison algorithm and the direct pore (DP) comparison algorithm. Unfortunately, pore comparison remains challenging because the pore alignment accuracy and the purely local feature extraction heavily affect the comparison result. 
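As a rough illustration of the direct pore idea, pairing pore positions between two pre‐aligned impressions and scoring the overlap, consider the sketch below. The greedy nearest‐neighbour pairing and the tolerance value are our simplifications, not a published DP algorithm:

```python
import numpy as np

def pore_match_score(pores_a, pores_b, tol=15.0):
    """Greedy one-to-one pore correspondence between two prints.

    pores_a, pores_b: (N, 2) arrays of pore centre coordinates,
    assumed pre-aligned (e.g. by minutiae-based registration).
    tol: maximum distance (same units as the coordinates) for two
    pores to be considered the same pore.
    Returns the fraction of pores in the smaller set that found a
    partner within tol.
    """
    a = np.asarray(pores_a, float)
    b = np.asarray(pores_b, float)
    unused = np.ones(len(b), bool)        # pores of b not yet paired
    matched = 0
    for p in a:
        d = np.linalg.norm(b - p, axis=1)
        d[~unused] = np.inf               # forbid double-pairing
        j = int(np.argmin(d))
        if d[j] <= tol:
            unused[j] = False
            matched += 1
    return matched / min(len(a), len(b))
```

A real system would add a robust pre-alignment step and weight matches by pore reliability; this sketch only conveys why alignment accuracy dominates the final score.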
Additionally, a limited number of studies have focused on other types of level 3 features. Jorgenson reported that a fingerprint with only a few minutiae (3–5) was successfully identified by combining edge shapes with minutiae. Reneau then published a case in which a fingerprint with no minutiae was differentiated by matching edge shapes and secondary ridges. Meanwhile, substantial efforts have been devoted to exploring algorithms relying on ridge counts, incipient ridges and creases. Our group recently proposed a new parameter termed the "frequency distribution of the distance between adjacent sweat pores" (FDDasp), which describes pore‐to‐pore location. The parameter was highly identifiable and was thus applied to differentiate two fingerprint fragments whose minutiae were the same. As Figure illustrates, the pore‐to‐pore distances of the two fragments were not consistent. In combination with other characteristics such as edge shape, we ultimately concluded that the fingerprints came from different fingertips. In future studies, more fingerprint samples should be included to further verify the identifiability of the proposed parameter, and larger areas of a single fingerprint should be explored statistically, such as the FDDasp in different regions of the same fingerprint. 4.2 Donor profiling and fingerprint age determination Fingerprint level 3 details offer information beyond identification, such as donor gender, donor age, donor race and the time since fingerprint deposition. The related research is summarized below. The value of level 3 features in sex determination has been evaluated in several studies. Nagesh et al. examined the fingerprint samples of 230 Indians and reported no significant difference in sweat pore sizes and frequency between males and females. 
Specifically, the pore frequencies of females and males were 8.40 and 8.83 pores/cm of ridge, respectively; the pore sizes of males ranged from 69 to 284 μm and those of females from 66 to 287 μm. Preethi et al. found that a pore count of 32 pores/cm 2 or fewer was more likely to be of male origin, whereas 36 pores/cm 2 or more was more likely to be of female origin; no significant difference was detected in pore types and shapes. Kumar and his co‐workers studied the pore shapes of left‐thumb ink impressions. They found a pore count of 2–4 pores/cm of ridge, with no difference between males and females; however, circular or round pores occurred more frequently in males than in females. Wang et al. detected shifts in pore location: the maximum longitudinal and transverse shifts of males were 166.46 μm and 61.00 μm, while those of females were 73.08 μm and 45.88 μm. Additionally, another study by the same group indicated that the pore sizes of males were larger than those of females. Murlidharf concluded that ridge shapes have a certain advantage over poroscopy in sex determination, possibly because the number of male samples in which a 1 cm ridge had one concave edge was higher than for females. Level 3 details have also been found useful in age determination: Nagesh et al. found that pore size gradually increased, and that pore position and pore shape varied, with age. Level 3 features are also related to group differentiation. Singh et al. studied fingerprints deposited by Brahmins and Rajputs of Himachal Pradesh and concluded that the pore size differed between the two communities, while no significant difference was found in pore frequency, interspacing, shape or position. Very recently, in the work of Govindarajulu et al., the ridge width of eleven criminals was found to differ between the right and left hands, while no significant differences were detected in normal people. 
It has also been observed that ridge topography may change as latent fingerprints age: Preda and his co‐workers reported that the ridges suffered narrowing and a loss of continuity over time. 4.3 Other applications In addition to the above applications, level 3 features have been applied to ascertain whether a fingerprint is a forgery. Champod et al. proved that the presence or absence of pores can reasonably be used to discriminate genuine from spoof transactions. Additionally, several studies have indicated that pore characteristics are associated with some sweating‐related medical diseases and thus have potential for the early diagnosis of such diseases.
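Returning to the FDDasp parameter introduced in Section 4.1, its computation can be sketched as below. Note that we approximate "adjacent" by nearest‐neighbour distance, which is an assumption; the original work may define adjacency along the ridge path. Two fragments can then be compared by, for example, the L1 distance between their histograms:

```python
import numpy as np

def fddasp(pore_centers, bins=None):
    """Frequency distribution of the distance between adjacent sweat
    pores (FDDasp), approximating 'adjacent' as nearest neighbour.

    pore_centers: (N, 2) array of pore coordinates (e.g. in um).
    Returns a histogram normalised to sum to 1.
    """
    if bins is None:
        bins = np.arange(0.0, 200.0, 20.0)
    pts = np.asarray(pore_centers, float)
    # Full pairwise distance matrix via broadcasting.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # exclude self-distances
    nearest = d.min(axis=1)              # one distance per pore
    hist, _ = np.histogram(nearest, bins=bins)
    return hist / hist.sum()
```

Because the histogram is normalised, fragments of different sizes remain comparable, which is what makes the parameter usable on partial prints.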
Individualization There is growing interest in utilizing level 3 details for fingerprint recognition, especially for those with fragmentary impressions. Jain et al. indicated that the error matching rate declined by 20 % when combining level 3 features with levels 1–2 features. Among the various level 3 features, pores have received huge attention. Back in 1912, Locard claimed that 20–40 pores are enough to give a personal identification opinion. From then on, many pore‐based matching algorithms emerged as the implementation of high‐resolution fingerprint images..[ , , , , , , , , ] Since pore shapes and sizes vary from one impression to another, the pore position is mostly used in fingerprint matching and improve the comparison accuracy to some extent. Current pore‐based fingerprint comparison systems mainly rely on two algorithms: alignment‐based pore comparison algorithm and direct pore (DP) comparison algorithm. Unfortunately, pore comparison is still a challenging issue because the pore alignment accuracy and only local feature extraction heavily affect the comparison result. Additionally, a very limited number of studies focused on other types of level 3 features have also been reported. Jorgenson reported one fingerprint with a limited number of minutiae (3–5 minutiae) was successfully identified by a combination of shapes of edges and minutiae. Reneau then published a case where a fingerprint with no minutiae was differentiated through edge shapes and secondary ridges matching. Meanwhile, substantial efforts were devoted to exploring algorithms relying on ridge counter, incipient ridge, and creases.[ , , , , ] Our group recently proposed a new parameter termed “frequency distribution of the distance between adjacent sweat pores” (FDDasp), which was used for describing the pore‐to‐pore location. The parameter was highly identifiable and thus applied to differentiate two fingerprint fragments whose minutiae were the same. 
As Figure illustrated, the pore‐to‐pore distances of the two fragments were not consistent. In combination with other characteristics such as edge shape, we ultimately gave an opinion that the fingerprints were from different fingertips. In the future study, more fingerprint samples should be included to further verify the identifiability of the proposed parameter. Meanwhile, larger area of one fingerprint sample should be statistically explored such as the FDDasp in different regions of the same fingerprint.
Donor profiling and fingerprint age determination Fingerprint level 3 details offer additional information than just identification such as donor gender, donor age, donor race, the time since fingerprint deposition, etc. Hence, it is of vital importance to provide related research below. The value of the level 3 feature in sex determination has been evaluated in several studies. Nagesh et al. examined included the fingerprint samples of 230 Indians and reported there was no significant difference in the sweat pore sizes and frequency between both the males and females. To be specific, the pore frequency of females and males were 8.40 and 8.83 pores/cm ridge, respectively. The pore sizes of males ranged from 69 μm to 284 μm and those of females were 66–287 μm. Preethi et al. found pore number less than or equal to 32 pores/cm 2 was more likely to be of male origin, whereas more than or equal to 36 pores/cm 2 was more likely to be of female origin. No significant difference was detected in pore types and shapes. Kumar and his co‐workers conducted a study to observe the pore shapes of left thumb ink impressions. They found the pore number was 2–4 pores/cm ridge, which demonstrated no difference in both males and females. However, circular or round pores possessed higher occurrence in males than females. Wang et al. detected the shift of pore location that the maximum longitudinal and transverse location shifts of males were 166.46 μm and 61.00 μm while those of the females were 73.08 μm and 45.88 μm. Additionally, another study undertaken by the same group also indicated the pore sizes of males were larger than those of females. Murlidharf concluded that ridge shapes had a certain advantage over the poroscopy in sex determination. The reason may be because the sample number of males whose 1 cm ridge had one concave edge was higher than that of females. Level 3 details were found useful in age determination. Nagesh et al. 
found pore size gradually increased, and the pore position and pore shape varied with the age. Level 3 features also have a relation to group differentiation. Singh et al. studied fingerprints deposited from Brahmins and Rajputs of Himachal Pradesh. They finally concluded that the pore size was different in both communities while no significant difference in pore frequency, interspacing, size, shape, and position. Very recently, in the work of Govindarajulu et al., the ridge width of eleven criminals was found to vary in right and left hands while no significant differences were detected in normal people. It has been observed that ridge topography may change with latent fingerprints age advanced. It has been reported by Preda and his co‐workers that the ridge suffered narrowing and a loss in ridge continuity over time.
Other applications

In addition to the above applications, level 3 features have been applied to ascertain whether a fingerprint is a forgery. Champod et al. showed that the presence or absence of pores can reasonably be used to discriminate genuine from spoof transactions. Additionally, several studies have indicated that pore characteristics are associated with certain sweating‐related medical diseases and thus have the potential to support the early diagnosis of such diseases.
Summary and Future Prospects

Fingerprints carry sufficient and reliable discriminative characteristics, which ensures their status in individualization. Over the past years, advances in analytical instruments and new technologies have accelerated the development of forensic chemistry, especially in level 3 characteristic detection and analysis. The visualization and application of level 3 features prove that third‐level features give additional information (gender, age, race, health, etc.) about the donor beyond individualization. This review is organized into four main sections. The first part provides a brief introduction to the level 3 feature types, along with an assessment of their quality and reliability. The second section summarizes the related techniques for detecting third‐level features, such as physical interaction methods, residue‐responsive reagents, MS methods and electrochemical techniques. The third part highlights the application of level 3 characteristics, especially in personal identification, donor profiling (age, sex, race, etc.), fingerprint age determination, spoof fingerprint discrimination and even the diagnosis of sweat‐related disease. Although considerable state‐of‐the‐art achievements have been attained in this field, third‐level details are still rarely utilized during the fingerprint identification process, for the reasons listed below: (i) Current visualization reagents for LFPs and deposition methods cannot adequately display the third‐level structures. For example, powder reagents are frequently used for latent fingermark visualization, relying on electrostatic adsorption between the powder and fingerprint residues. Unfortunately, the powder easily aggregates and inevitably adheres to certain pore regions, distorting some microscopic details as well.
Additionally, the traditional ink deposition method contaminates fingertips and, more importantly, excess ink covers the level 3 features. (ii) Fingermarks left at crime scenes are usually of poor quality; their level 3 features are insufficient or poorly reflected and thus cannot be extracted for the subsequent identification procedure. (iii) Fingerprint images in fingerprint databases are routinely captured at a resolution of 500 ppi, which cannot meet the standards of third‐level feature extraction. In such a situation, comparison cannot be achieved even if the fingermarks at crime scenes possess enough level 3 features. (iv) Last but not least, no systematic and mature analytical methods have been developed for level 3 features. Although emerging high‐resolution (≥1000 ppi) fingerprint imaging techniques have facilitated the growth of third‐level‐feature‐based algorithms, there is still a long way to go in improving comparison accuracy. For instance, pore alignment accuracy and reliance on only local feature extraction heavily affect the comparison results of pore‐based algorithms. Hence, several challenging issues need to be resolved before level 3 features can be implemented. Specifically: (i) Developing reliable visualization methods that allow effective extraction of level 3 features. First, as PVA‐based and PDA‐based papers have exhibited exceptional performance in sweat pore mapping, whether they can be an alternative to forensic tape should be clarified in future work. Second, some novel detection techniques, such as MS imaging and SECM, excel at level 3 feature detection; nevertheless, their long scanning times might hinder implementation in forensic investigation practice. Hence, it is urgent to develop time‐saving imaging strategies, such as changing the scanning path into a zigzag or spiral mode, to enable large‐area imaging.
Additionally, novel tips such as soft probes should be explored so that SECM can scan delicate samples with topographic features. Third, the compatibility of the mentioned detection techniques with DNA analysis should constitute a further development step to be investigated. (ii) Utilizing high‐resolution fingerprint imaging or capture techniques in fingerprint database construction. Only in this way can the level 3 features of fingerprint samples in the database be extracted and compared with fingermarks from crime scenes. (iii) Exploring multiple parameters for third‐level detail analysis and improving the accuracy of level‐3‐feature‐based algorithms. The concept of level 3 details is often limited to sweat pores, which easily leads to missing information, whereas it can be broadened to ridge contours, such as the angle of bifurcations. We believe the application scenarios can be expanded as more level 3 parameters are systematically investigated. (iv) Establishing standard measurement methods. Current research has adopted various measurement methods for level 3 parameters; under such circumstances, comparison among different studies is not possible. Hence, it is urgent to establish a scientific measurement method and standardize it in future work. (v) Investigating as many fingerprint samples as possible, to screen out the characteristic parameters and verify the accuracy of prediction results. (vi) Focusing not only on level 3 details but also on other fingerprint information. Since the analysis methods for third‐level details are still evolving, and many of the mentioned techniques provide chemical information about residues beyond mere physical image patterns, more parameters involving fingerprint patterns, minutiae and chemical components should be considered simultaneously and combined with level 3 features to support more robust individualization, donor profiling, spoof fingerprint differentiation, etc.
The authors declare no conflict of interest.
Hongyu Chen received her master's degree in Forensic Science at the Criminal Investigation Police University of China in 2020. She is now a PhD student at the School of Chemistry and Biological Engineering, University of Science and Technology Beijing. Her research interests mainly focus on the visualization and analysis of multidimensional information in latent fingerprints, such as level 3 features, endogenous/exogenous fingerprint residues, donor profiling, fingermark age and DNA profiles.
Rongliang Ma is now a professor at Institute of Forensic Science, Ministry of Public Security in Beijing. He studied at Centre for Forensic Science, University of Technology Sydney (UTS), Australia and received his PhD from UTS in 2012. His research interest and activities include the detection of latent fingermarks, the practice and theories of fingerprint identification and Automated Fingerprint Identification System (AFIS), and fingerprint intelligence. He is a member of the Interpol AFIS Expert Working Group and the Chair of Fingerprint Workgroup (FW) of Asian Forensic Science Network (AFSN). He is also an adjunct professor of several Chinese Universities .
Meiqin Zhang is currently a professor at University of Science and Technology Beijing. After a PhD at Peking University in 2006, she pursued her research as a postdoc at the Ecole Polytechnique Fédérale de Lausanne (Switzerland) from 2006 to 2007 and a researcher of ‘Marie Curie Incoming International Fellowship’ at the University of Warwick (UK) from 2007 to 2009. Her research activities include electrochemistry at liquid‐liquid interfaces, latent fingerprints development and imaging, development and application of scanning electrochemical microscopy .
Changes in hand function and health state utility after cubital tunnel release using the United Kingdom Hand Registry

Cubital tunnel syndrome (CuTS) is the second most common compression neuropathy of the upper extremity in the United Kingdom (UK), with an estimated annual incidence of 44/100,000 persons. If symptoms are not adequately improved by non-surgical modalities, surgical decompression can be considered. There are multiple surgical treatment options available. Recent work suggests that in-situ release is the safest operation and equally as effective as other procedures, but this topic remains controversial. Patient-reported outcome measures (PROMs) are questionnaires that evaluate a patient's health in generic or condition-specific terms and can be used to evaluate the effectiveness of treatments, including surgery. In CuTS, a large variety of PROMs is used. Previous research has found an improvement in patient-reported symptoms and hand function after surgical decompression, as measured by the Boston Carpal Tunnel Questionnaire (BCTQ) and Disabilities of the Arm, Shoulder, and Hand (DASH) questionnaires. However, few studies have evaluated the change in health-related quality of life (HR-QoL) after surgical treatment for CuTS. HR-QoL itself can be assessed by quantifying the desirability of the health states described by PROM responses for a given population (such as UK inhabitants). For example, having moderate symptoms of anxiety might be less desirable than having no mobility but more desirable than always having intense pain. This preference-based scoring is termed 'health state utility' (HSU). It is possible that this may vary between countries, so different HSU value sets exist for different countries.
HSU data are important for health economic processes, such as cost-utility analyses, which might also aid in determining the optimal treatment strategy for patients with CuTS. Therefore, the aim of the present study was to evaluate the change in HSU in the first 6 months after surgical treatment for CuTS using the national hand registry of the UK. In addition, we evaluated the change in hand symptoms using the Patient Evaluation Measure (PEM).

Study design

This observational cohort study uses data from the United Kingdom Hand Registry (UKHR) database, a voluntary national registry for quality assurance of surgical treatment outcomes for hand and wrist conditions. The data were prospectively obtained and retrospectively analysed. Patients who agreed to participate in the registry were asked to complete PROMs preoperatively and at predefined timepoints (2 and 6 months) postoperatively. Originally, PROMs were completed and returned by post. In 2018, the registry was updated, and PROM data were collected by email. For patients without email, PROM responses could be captured using Short Message Services (SMS). Results were collated by a central administrator independent of the operating surgeons. Each patient provided written consent before inclusion in the registry, and identifiable data were anonymized before release from the registry for analysis. This study was exempt from ethical approval by the University of Oxford Clinical Trials and Research Governance. The study is reported following the Reporting of studies Conducted using Observational Routinely-collected Data (RECORD) guidelines.

Patients

All consecutive adult patients who participated in the registry between February 2012 and February 2020 and received cubital tunnel surgery were identified and evaluated for eligibility.
The exclusion criteria for this study were as follows: (1) patients with missing demographics at baseline; (2) patients who underwent cubital tunnel decompression as part of revision surgery; (3) patients who did not complete PROMs at baseline and at least once after surgery; and (4) surgical variants for which fewer than 30 patients existed in the database (i.e. cubital tunnel decompression with submuscular transposition and/or medial epicondylectomy).

Intervention

All patients underwent cubital tunnel surgery as chosen in conjunction with their operating surgeon. Operative details were uploaded to the UKHR online platform ( https://www.ukhr.net ). This study only evaluated in-situ cubital tunnel decompression and decompression with subcutaneous transposition.

PROMs

Two PROMs are captured in the UKHR: the five-level EuroQol five-dimensional descriptive system (EQ-5D-5L) and the PEM. Patients undergoing cubital tunnel surgery were asked to complete both PROMs at intake and at 2 and 6 months postoperatively. The EQ-5D-5L is a generic health status measure evaluating five dimensions of health (1: mobility; 2: self-care; 3: usual activities; 4: pain/discomfort; 5: anxiety/depression), representing global HR-QoL. The preference-based scoring of the EQ-5D-5L for the UK, i.e. the UK utility index set, ranges from −0.594 to 1.0, where 1 reflects the best health state utility imaginable, 0 is 'death' and negative values are considered worse than death. The PEM measures hand function. Between 2012 and 2017, the UKHR captured the original 10-item version of the PEM.
Our analysis did not include parts 1 or 3 of the PEM questionnaire, as these parts measure the care process and are more akin to a patient-reported experience measure than a hand function measure.

Data-access and cleaning methods

We had access to participants' demographics (sex, age), surgical procedure and item-level data for both PROMs at each timepoint. The EQ-5D-5L utility index was calculated for each timepoint using the UK value set via the EQ-5D-3L crosswalk. The PEM total score was calculated as the sum of the item response scores (range of possible scores 10–70; lower scores indicate better hand performance).

Study size and statistical analyses

The study size was determined by convenience, as the number of eligible patients added to the registry between February 2012 and July 2020. We performed a linear mixed-effects model (LMM) for repeated measures to estimate the change in PROM scores between intake and each postoperative timepoint. Two LMMs were fitted: one for the EQ-5D-5L and one for the PEM. The fixed effects of the model were timepoint, age and sex, while the random effect was the individual patient. The estimated marginal mean, including a 95% confidence interval (CI), was computed for each timepoint and compared with Tukey's adjustment for multiple testing. Scores were calculated for all patients as one group and stratified by surgical procedure. The outcomes of different surgical procedures were not compared, because we had no access to data that informed why one surgical approach had been chosen over another. Missing data (approximately 15%) were not imputed, as imputation has no added value in LMM. A p-value <0.05 was considered statistically significant. Baseline characteristics of patients included in the EQ-5D-5L and PEM analyses were compared using effect sizes, and the overlap between the cohorts was calculated.
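As a concrete illustration of the scoring just described, the 10-item PEM total is simply the sum of the item responses. This is a minimal sketch assuming each item is answered on a 1–7 scale (which yields the stated 10–70 range, lower being better); it is not the registry's actual code, and the example responses are invented:

```python
def pem_total(item_responses):
    """Sum the 10 PEM item responses into a total score (10-70, lower is better)."""
    if len(item_responses) != 10:
        raise ValueError("the original PEM hand-function part has 10 items")
    if any(not 1 <= r <= 7 for r in item_responses):
        raise ValueError("each item is assumed to be scored from 1 (best) to 7 (worst)")
    return sum(item_responses)

# Best and worst possible scores, plus an arbitrary example respondent.
assert pem_total([1] * 10) == 10
assert pem_total([7] * 10) == 70
assert pem_total([4, 3, 5, 2, 4, 4, 3, 5, 4, 4]) == 38
```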
Lastly, a sensitivity analysis was performed in which the LMMs were repeated with only the subgroup of patients who completed both the PEM and EQ-5D-5L.

Assessment of differences between completers and non-completers

As participation in the BSSH registry is voluntary, missing data were expected. To evaluate the potential risk of selection bias from loss to follow-up, the cohort of patients who completed PROMs after surgery (completers [C]) and patients who did not (non-completers [NC]) were compared at baseline using Cohen's d effect sizes for numeric data and Cliff's delta for categorical data.

Estimating the MIC for the PEM in this patient cohort

We attempted to estimate the minimal important change (MIC) for the PEM to evaluate whether the observed change was also clinically relevant. We calculated the MIC as half the standard deviation (SD) of the PEM at baseline, which is commonly used in clinical trials.
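The half-SD rule of thumb used for the MIC can be sketched as follows; the baseline scores below are invented for demonstration and are not registry data:

```python
from statistics import stdev

def mic_half_sd(baseline_scores):
    """Distribution-based MIC estimate: half the sample SD of baseline scores."""
    return 0.5 * stdev(baseline_scores)

# Hypothetical baseline PEM totals for a handful of patients.
baseline_pem = [41, 35, 48, 39, 44, 30, 52, 38]
mic = mic_half_sd(baseline_pem)

# A pre-to-post change in PEM score larger than `mic` would be read as
# clinically relevant under this rule of thumb.
assert mic == 0.5 * stdev(baseline_pem)
assert mic > 0
```

Note that `statistics.stdev` computes the sample SD (n − 1 denominator), the usual choice for this rule.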
Between February 2012 and July 2020, 565 unique patients with CuTS were entered into the registry. Of those patients, 281 (50%) fulfilled the inclusion criteria for the analysis of EQ-5D-5L scores and 268 (47%) for the analysis of PEM scores. There was an 82% overlap in patients between the two analyses. The reasons for exclusion are summarized in . Retention rates at 6 months after surgery were high, in the range of 74%–83%; the number of patients analysed at each timepoint is shown in . The demographics of patients included in the EQ-5D-5L and PEM analyses are shown in , which confirms that patients in both groups were similar (Cohen's |d| < 0.2, 'negligible differences'). In addition, the included patients (C) were similar in sex, operation type and baseline scores to patients who were not included (NC), but somewhat older (Table S1).
The mean EQ-5D utility index did not show significant improvement, with values of 0.63 (95% CI: 0.60 to 0.67) at intake, 0.67 (95% CI: 0.63 to 0.68) at 2 months and 0.65 (95% CI: 0.61 to 0.68) at 6 months (p = 0.99). Mean EQ-5D-5L utility index scores were similar for the two types of operation. The mean PEM score improved from 41 (95% CI: 39 to 42) at intake to 29 (95% CI: 27 to 31) at 2 months (p < 0.001). This improvement was larger than 0.5 SD at intake (6.5), indicating a clinically relevant change. At the 6-month follow-up, the mean PEM score was 30 (95% CI: 28 to 32), which was not significantly different from the 2-month follow-up (p = 0.99). Improvements in PEM scores were seen for both types of surgery. The sensitivity analyses of the 231 patients who completed both PROMs showed similar estimates for the PEM and EQ-5D-5L (Table S2), indicating the robustness of the results. This study shows that for patients with CuTS, in-situ decompression with or without subcutaneous transposition yields a clinically relevant improvement in hand symptoms (as measured with the PEM) at 2 months postoperatively, which persists at 6 months. However, improvements in hand symptoms were not paralleled by improvement in generic health state utility (as measured by the EQ-5D-5L). There are two possible explanations for this important discrepancy. It may be that the EQ-5D-5L is not sensitive to meaningful changes in hand function after treatment. Alternatively, the improvement in hand function demonstrated by the PEM change is not perceived to be of high value to the UK population. Currently, evaluation of changes in HR-QoL and health economic analyses in hand surgery rely on generic preference-based measures (such as the EQ-5D-5L), as no hand-specific preference-based measure exists. Because the EQ-5D-5L is not specific to hand conditions, it might not be able to capture important impacts of hand function on quality of life.
For example, the EQ-5D-5L asks about washing and dressing, usual activities, pain, anxiety and general mobility, but not about the elements of health for which the nerve decompression is performed, such as pain, weakness or paraesthesia. This means that surgical treatments for hand conditions are at risk of being undervalued when assessed with the EQ-5D-5L for cost-effectiveness analyses. Consequently, treatments might be unfairly labelled as being cost-ineffective or of 'limited clinical value'. However, as stated, it may be that the changes in hand function are not perceived as important by the general population, and this needs to be considered rationally as well. The discrepancy we have described here requires further exploration, and possibly the development of hand-specific preference-based measures for health economic evaluation in hand surgery. This has already been successfully undertaken in the field of breast reconstruction, for similar reasons. Patients with CuTS reported an improvement in hand function at 2 months postoperatively when completing the PEM. The improvement in PEM score was observed both for in-situ decompression and for subcutaneous transposition, with wider confidence intervals for decompression with subcutaneous transposition owing to a smaller sample size. Improvement in patient-reported symptoms after cubital tunnel surgery seems consistent throughout the literature, irrespective of the hand-specific PROM used. Therefore, our results on the improvement of hand symptoms after surgery are in line with previous research. This study has some limitations. First, participation in the registry is currently voluntary, and only a fraction of the patients who undergo cubital tunnel surgery in the UK are entered into the dataset annually. In addition, 26% of the patients who provided preoperative data were lost during follow-up. Therefore, it is uncertain to what extent the results from this study are generalizable to the UK population.
To increase participation in, and adherence to, the registry, administrative burdens for clinicians and patients should be minimized. Therefore, we have recently developed a computerized adaptive test version of the PEM for patients with CuTS and thumb base osteoarthritis to reduce the questionnaire length by 80%. Implementation of these reduced questionnaires may boost response rates and improve the generalizability of the UKHR. Second, the minimal important change (MIC) of the PEM in patients with CuTS had not been reported before. Therefore, we tried to estimate the MIC with a rule of thumb that is commonly used in clinical trials, which states that the threshold of discrimination for changes in HR-QoL is consistently approximately half the SD at baseline. A future study should calculate the MIC of the PEM for all common hand conditions using an anchor-based approach, to better interpret the clinical meaning of statistical changes. Third, we were unable to investigate which surgical treatment options yield the best outcomes for patients with cubital tunnel syndrome. Some operation options (e.g. medial epicondylectomy) were excluded from the analyses because of low sample sizes that would lead to inadequate statistical power. Furthermore, the registry does not capture any information on the clinical decision-making behind performing one surgery over another. For example, we did not know whether the treating surgeon always performed transposition or did so only for clinically evident subluxation. To make meaningful comparisons between treatment options using the UKHR, more clinical data need to be captured. This study adds to the evidence that valid, responsive, consistent, disease-specific measures should be included in any considerations by funding bodies alongside generic health measures.
Future research will focus on the development of a hand-specific preference-based measure that can detect meaningful changes in hand function after treatment to allow health economic evaluations. |
Presidential address 2017 William Harkness FRCS October 10th 2017 Denver, Co USA: 2017—annus mirabilis, a global view of neurosurgery for children

In 1667 the English poet, literary critic, translator, and playwright John Dryden published an historical poem entitled 'annus mirabilis', or 'year of miracles'. Dryden was a key figure in Restoration England and wrote the poem whilst in Charlton in Wiltshire, where he had taken his family to escape from the Great Plague in London, the last major outbreak of bubonic plague in England, which killed 100,000 over an eighteen-month period. Indeed, the impact of the Great Plague was such that Charles II and the whole Royal Court left London, firstly to Salisbury and then to Oxford where Parliament sat, leaving London in the hands of the Lord Mayor, Sir John Lawrence, and the aldermen of the city. Exodus was not as easy for the poor, and it became increasingly difficult for them to leave as the plague ravaged the population. Dryden's poem was a celebration of the recovery of England from a series of events including not only the plague but also two wars and the Great Fire of London, the description of which formed the second half of the poem. It was considered that the Great Fire was in fact a miracle, as the city was saved, and Charles pledged to rebuild and improve those parts of the city that had been destroyed; indeed, many of London's most famous landmarks were built following the fire, such as Christopher Wren's St Paul's Cathedral and the Monument. The remodelled London was never again visited by bubonic plague, which may in part have been due to the eradication of poor housing and the creation of wider streets.
‘annus mirabilis’ was very well received by the King and in 1668 Dryden was made England’s first Poet Laureate, a title that is still in existence and which over the years has been held by such luminaries as Wordsworth, Tennyson, Masefield and more recently by John Betjeman and Ted Hughes. The poem ‘annus mirabilis’ was a display of optimism despite tragedy and an expression of the expectation of rebirth and better things to come. Dryden’s view was that God had saved London and England from certain destruction and therefore that things could certainly have been worse. I have chosen this title in recognition of my year as President of the ISPN 2016–2017 and, like Dryden, I have hope and expectation that the situation of Global Neurosurgery for Children will improve in the years to come and the right of all children to be able to access safe and optimal surgical treatment for neurosurgical conditions will be achieved.
To be awarded the ISPN Poncho is a great honour and certainly from my own perspective it is the highest point of my career in paediatric neurosurgery. However, the term of presidency is brief and, therefore, to achieve any realistic goals during the period of tenure is only possible by continuity from one president to the next. I was extremely fortunate to have had as my predecessor Prof Graciela Zuccaro and as my successor Prof Graham Fieggen, both of whom share with me a vision of improved neurosurgical care for children on a global scale. It was through Prof Zuccaro that I became involved in the Global Initiative for Children’s Surgery (GICS) and during the course of her presidency, we had many conversations about the role of the ISPN in the context of global surgery and how we might improve membership in Low and Middle Income Countries (LMICs) according to the World Bank classification. We wanted to develop a strategy for the ISPN that could run through several presidential terms and, thus, it was essential that Graham Fieggen was also involved in these discussions. In my role as ISPN President, I was very fortunate to be invited to contribute to scientific meetings, teaching courses for the ISPN and the ESPN and also to be a visiting professor in a number of places. From our very successful 44th annual meeting in Kobe hosted by Prof Mami Yamasaki, I travelled to Myanmar for an ISPN teaching course and this was followed by trips to India, Africa, North and South America, Russia, several European countries and New Zealand. During these travels I attempted to get some idea of the local circumstances and in particular the contrasts in man power and infrastructure in the countries and units that I visited. There are clearly many limitations to this as a process of information gathering but it did allow me to formulate some personal perspectives that I would like to share.
Global health has rightly become a very popular topic in recent years; it concerns the health of populations in the global context and has been defined as “the area of study, research and practice that places a priority on improving health and achieving equity in health for all people worldwide”. It should be distinguished from public or international health, and its ambitions require not only multidisciplinary collaboration within medical disciplines but also other collaborations which allow the development of adequate infrastructure to facilitate medical care. In 1978 the Alma Ata Declaration was adopted and the aim of ‘Health Care for All’ established. In 1980 Halfdan Mahler, then Director-General of the World Health Organisation, first raised the issue of surgery as a part of health care for all, but the current global surgery movement was born out of the report of the Lancet Commission on Global Surgery published in 2015 and the subsequent Global Surgery 2030. The commission demonstrated the huge inequity in the provision of surgical care, particularly in LMICs. Five billion people lack access to safe, affordable surgical and anaesthetic care when needed, and an additional 143 million surgical procedures are needed to redress this deficit. Most importantly, it became recognised that economic growth is being impaired by the lack of adequate resources and that an investment of $350 billion would result in an expected economic growth of $12 trillion. At a stroke, the importance of non-communicable disease has been recognised in economic terms and the economists and politicians are taking note. As Dr. Jim Yong Kim, President of the World Bank, said in his opening address at the first meeting of the Lancet Commission on Global Surgery, “Surgery is an indivisible, indispensable part of health care”, and referred to surgery as “the neglected stepchild of Global Health”.
At the 68th World Health Assembly (WHA) meeting in 2015, Resolution 68.15 was formally adopted by the 194 member states of the WHO. The resolution was entitled “Strengthening Emergency and Essential Surgical Care and Anesthesia (EESA) as a Component of Universal Health Coverage” and is aimed at addressing the gaps in the provision of surgical and anaesthetic services. For the first time, it is being acknowledged that surgically treatable conditions are responsible for three times more deaths than malaria, tuberculosis and HIV/AIDS combined. The resolution acknowledges that surgical care should be an essential part of universal health coverage and urges governments to make plans for its implementation. The nine clauses of the resolution cover finance, data collection, manpower, education, provision of consumables, infection control and political involvement, by ensuring that Ministries of Health take a lead role in the promotion of EESA. The ten most important needs for the provision of safe surgical and anaesthetic care have been further defined by the Lancet Commission, and the gap analysis performed has shown that LMICs are those with the greatest need in terms of infrastructure, equipment, disposables and manpower. In addition, the time taken to access surgical care was mapped and the deficiencies revealed, once again primarily in LMICs. One of the concepts developed from this work has been the definition of Bellwether procedures. These are the procedures which can be used as indicators to gauge the effectiveness of the provision of surgical care. For surgery as a whole, they have been defined as caesarean section, laparotomy and treatment of open limb fractures, with access to these interventions needed within 2 h. As yet, Bellwether procedures for the surgical specialities do not exist, but if we want to gauge the accessibility of the neurosurgical services that we supply to children then these procedures must be defined for our own speciality.
In the core packages for surgical and anaesthetic care described by the Lancet Commission, the only neurosurgical procedure listed is “burr holes” for emergency care in the basic trauma package. No neurosurgical procedures of any sort feature elsewhere. In 2015 “Essential Surgery”, the surgical volume of the World Bank’s Disease Control Priorities (third edition), was published and this recognises the importance of head trauma as both a major cause of death and disability, with 10 million people suffering traumatic brain injury (TBI) globally every year. In addition, head injuries are commonest in LMICs and 90% of trauma deaths occur in LMICs. The guidelines for management have been, for the most part, drawn up in HICs and do not take into account the resource issues that affect LICs, nor the fact that for many surgeons neurotrauma is not an attractive speciality. Burr holes again appear in the trauma procedures list, whilst shunt for hydrocephalus appears in the congenital section, the only other neurosurgical procedure quoted. The WHO Guidelines for Essential Trauma Care published in 2004 were an attempt to create affordable and achievable basic standards of emergency trauma care and describe very well the situation present in many countries where, in the absence of neurosurgeons, TBI is managed by general practitioners and general surgeons. There are some excellent recommendations in this report but it acknowledges that the American Association of Neurological Surgeons (AANS) guidelines, considered to be the gold standard for head injury management, are not achievable in a low-resource setting where even basic imaging and surgical facilities may not be present. In their paper “Global Neurosurgery: The Unmet Need”, Kee Park, Walt Johnson and Robert Dempsey have outlined the huge unmet need for neurosurgery. Whilst in HICs the ratio of neurosurgeons is 1 per 80,000 population, data suggest that in LICs the ratio may be as low as 1 per 10 million population.
This excellent paper is a “call to arms” for the neurosurgical community to become engaged in global surgery. It acknowledges the contribution of organisations such as the Foundation for International Education in Neurosurgery (FIENS) and the World Federation of Neurosurgical Societies but also points out that neurosurgical representation in working groups on trauma has been minimal. We are doing well in the areas of neurosurgical education but that is not enough, and the paper stresses the need for interdisciplinary working to develop necessary infrastructure and the need for academic research and publications on neurosurgery from LICs. Most importantly, the paper stresses that neurosurgeons need to become advocates for our speciality at a national and international level so that we can really address the unmet need. Neurosurgeons need to produce high-quality recommendations appropriate for district-level hospitals in LMICs and direct our educational efforts appropriately. Unfortunately, in the paper “Global Neurosurgery: The Unmet Need” there is no mention of children, despite the fact that in many LMICs the under-18 population may be as high as 50% and that in LICs the childhood population is rapidly increasing. When it comes to the paediatric population, it follows that the deficiencies found in the provision of care for general neurosurgery are even more of a problem. As these countries are also those with the lowest density of neurosurgeons, there is little chance for true sub-specialisation in paediatric neurosurgery and care for children is therefore shared between paediatric general surgeons and adult-trained neurosurgeons. Whilst in some situations the care may be excellent, there is little doubt that without adequate training in the pathologies of childhood many children will receive poor care or no care at all.
In 2016 the ISPN was invited to be a stakeholder in the fledgling Global Initiative for Children’s Surgery (GICS). This initiative was created by a group of general paediatric surgeons who wished to explore the position of general surgery for children globally. GICS is therefore a consortium of providers, institutions, and allies from both the global north and the global south. It is an inclusive group which believes that children’s surgery includes all provision of surgical care to children, not just paediatric surgery and so GICS includes all specialties and subspecialties involved in children’s surgical care, such as plastic surgery, orthopaedics, anaesthesia, intensive care, radiology, pathology, laboratory medicine, paediatrics, nursing and physical and occupational therapy, as well as neurosurgery. By bringing together those that provide care with the policy makers and administrators, GICS aims to analyse the current state of surgical care in LMICs; develop global, regional, national and local priorities to improve the delivery of surgical care for children in LMICs; and identify and bring together resources to address those global, national, and regional priorities. Two meetings were held in 2016, the first inaugural meeting at the Royal College of Surgeons, London and the second at the American College of Surgeons in Washington. As a result of platform presentations and break-out groups, an essential resources document was drafted which included some neurosurgical detail drafted by those neurosurgeons present. There was also a commitment on behalf of the ISPN to continue engaging with GICS and contribute to further meetings. I attended both of the GICS meetings and reported back to the executive board of the ISPN. 
I gave the opinion that whilst it was completely appropriate that we engage with adult neurosurgery in the Global Neurosurgery platform, GICS and our paediatric surgical colleagues share many of the same requirements in terms of resources and manpower as we do. It is for this reason that paediatric neurosurgery is, in most developed health care systems, carried out in the paediatric surgical setting within specialist paediatric hospitals rather than in adult neurosurgical wards and units. During the two GICS meetings I discussed the provision of surgical care for neurosurgical conditions with general surgeons and learnt first-hand how poorly developed the speciality of neurosurgery is in some areas of the world and how much of the care for head injuries and acquired and congenital disorders of the nervous system is provided by general paediatric and orthopaedic surgeons. This reflected my own experience of starting in paediatric neurosurgery in the United Kingdom in the 1980s, when our own sub-speciality was getting started and much congenital work was managed by our paediatric surgical colleagues. The links created through GICS enabled me to establish a dialogue with paediatric surgeons, particularly in Sub-Saharan Africa where the disparity between the need of the childhood population and the provision of neurosurgical care is most pronounced.
Immediately after the ISPN meeting in Kobe, we travelled to Yangon in Myanmar for the first ISPN teaching course of the presidential year and the first to be held in that country. In addition to an excellent ISPN faculty, we were joined by a number of adult colleagues with a history of collaboration with Myanmar. Notable amongst these was Dr. Jack Rock from Detroit, who has worked with Prof Win on a number of occasions and is an active member of FIENS and their regional coordinator for Asia. FIENS has been active since 1969 and has placed volunteers in over 22 countries as well as working in partnership with other organisations such as the Neurosurgery, Education and Development Foundation (NED) and the College of Surgeons of East, Central and Southern Africa (COSECSA). Myanmar has been much in the news in recent months, notably due to the plight of the Rohingya people, many of whom have fled to Bangladesh, thereby creating a further humanitarian disaster, one accentuated by severe flooding. Myanmar is a country of some 54 million people of whom 10.8 million, or 20%, are children. According to WHO statistics, approximately 1% of GDP is spent on healthcare, the GDP per capita is $1275.02 and the average life expectancy is 66.04 years. Thirty-six percent of the population is urban. There are 23 trained neurosurgeons in Myanmar but a further 30 in training and so the future looks optimistic. As yet, there are no subspecialist paediatric neurosurgeons, although several of the trainees who attended our teaching course expressed a particular interest in paediatric neurosurgery. This stresses the need for education in paediatric neurosurgical pathology, clinical conditions and their management and, finally, the surgical procedures required to serve the 10.8 million children. Prior to military rule in 1962, the neurosurgical links were primarily with the UK, but the military government discouraged any outside contact and so there was a distinct change in medical education over the following 45 years.
With the recent changes in the political picture in Myanmar, there has been active collaboration internationally and the ISPN teaching course marked a significant landmark in the development of neurosurgical care for children in the country. There are now active neurosurgical collaborations between Myanmar and the USA, India, Japan and Switzerland and I am pleased to report that the course was met with much enthusiasm and a further course is now planned for 2018. In addition, trainees from Myanmar are receiving training overseas and of particular note are the so-called “South–South” links with India. South–South collaborations are those in which trainees receive fellowships in countries that have a similar socioeconomic structure to their own and therefore receive a more pertinent training.

Take home message: After political isolation, effective collaborative partnerships have been created and the number of trained neurosurgeons is rapidly increasing. Paediatric neurosurgery does not yet exist as a sub-speciality, but the ISPN teaching course is fulfilling an essential role both in stimulating interest in our sub-speciality and in providing education. Collaboration with other countries in South Asia to create South–South partnerships offers the prospect of developing training opportunities.
This is perhaps an appropriate moment to discuss the political role of the ISPN. It is important to stress that the ISPN is an apolitical organisation and will never ally itself to any political party or regime. Our mission statement is clear that “The Mission of the ISPN is to improve the health and welfare of children requiring neurosurgical care throughout the world by scientific research and close international cooperation irrespective of class, colour, creed or economic condition”. The constitution and by-laws go on to state that we will strive to achieve this by:

- Promoting and supporting effective social, clinical and scientific communication between paediatric neurosurgeons, basic scientists, political and governmental bodies throughout the world.
- Developing and cementing relationships with other international organisations committed to the improvement in the health and welfare of the sick and underprivileged child.
- Promoting and developing training schemes at a national and international level in paediatric neurosurgery supported by courses for developing countries and the provision of scholarships and exchange programs.
- Supporting and promoting continued medical education within the society’s membership, ensuring the maintenance of the highest levels of scientific and clinical knowledge.
- Providing practical support to under-resourced colleagues working with underprivileged children in developing countries.

Both local and national politics can have a direct effect upon the health and well-being of children and particularly those with neurosurgical conditions. Whilst the ISPN is apolitical, it can and should contribute to political debate and lobbying, either alone or by joining with other societies which share our interests.
In particular the ISPN, through its executive board and membership, can comment on political matters affecting the neurosurgical care of children or on public health matters that have an impact on congenital or acquired conditions of the nervous system. Therefore, in matters determining the infrastructure within which we work, in matters pertaining to children’s health and education and in matters of child protection, the ISPN should use its voice on behalf of children who otherwise will have no advocates. Children are not members of the electorate in any country; they cannot lobby for themselves and so they are often neglected by politicians. Examples of where the ISPN could have influence might be the issue of folate supplements in the prevention of neural tube defects, Vitamin K injection at birth and head protection for sport and road use. We should help families in securing a safe and nurturing environment within which their children can develop, and lobby so that payment at the point of delivery of surgical care does not result in severe financial hardship for families. The history of neurosurgery is rich with examples of neurosurgeons who have made a major political contribution. Sir Victor Horsley supported movements as diverse as women’s suffrage, the Temperance League, anti-rabies legislation and the British Medical Association, all interspersed with highly significant surgical and physiological advances. He was also renowned for the kindness and understanding that he showed when looking after children. As his biographer remarked, “Children understood and trusted him at once: he never chaffed or ‘talked down’ to them and though very gentle and pitiful, he was always bracing and straightforward with a young patient and he seemed able to really see from the child’s point of view”.
Sir Hugh Cairns did much to prevent injury to motorcyclists during World War II by lobbying for proper head protection following the death in 1935 of Aircraftman TE Shaw, aka TE Lawrence or “Lawrence of Arabia”. Lawrence had been riding his motorcycle without a helmet and sustained fatal injuries when swerving to avoid two children. In a twenty-one-month period prior to World War II, there were 1884 fatalities amongst motorcyclists, two thirds due to head injury, and the blackout restrictions of wartime led to a 20% increase in the death toll. Cairns called for the introduction of helmets and in 1941 the army, which had been losing two despatch riders a week, introduced compulsory helmets. More recently, Hunt Batjer and Rich Ellenbogen have had an impact in the field of American football through their work with the National Football League in the USA. In the UK, Sam Galbraith became a prominent Labour health spokesman, whilst Balaji Sadasivan (Singapore), George Nga Ntafu (Malawi) and my friend Upendra Devkota (Nepal) have all achieved political eminence. In the United States, a paediatric neurosurgeon has even been a presidential candidate (Ben Carson) and is now Secretary of Housing and Urban Development in the Trump administration.
In March 2017, it was my very great honour and pleasure to deliver the Ginde Oration for 2017 during the 28th Annual Congress of the Indian Society for Pediatric Neurosurgery (IndSPN), held jointly with the 2nd meeting of the Asian-Australasian Society of Pediatric Neurosurgery (AASPN). Dr. Ram Ginde was a pioneer Indian neurosurgeon who joined the staff at Bombay Hospital in 1953 and after whom the Oration was created by Prof Sanat Bhagwati in 1991. Along with Jacob Chandy and B Ramamurthi, Ginde did much to further neurosurgery in India and he was for many years editor of Neurology India and the representative of India at the WFNS for three terms. What is perhaps less well known is the relationship between Ram Ginde and Meher Baba. Meher Baba was born in Pune to Irani Zoroastrian parents and devoted himself to mysticism from the age of 19 years; in 1923 he established his ashram in Meherabad, where he opened a school, hospital and dispensary. From the age of 31 until his death, he maintained a code of silence, communicating by an alphabet board or hand gestures. He became popular in Western culture from the 1940s onwards and was a vocal opponent of the use of psychoactive drugs in the 1960s. Meher Baba was involved in two road traffic accidents causing neck and neurological problems and he also suffered from trigeminal neuralgia, for which Ram Ginde was consulted; Ginde subsequently became a lifelong friend and follower. The subject of the Ginde Oration was within the field of epilepsy surgery and the contribution of the neurosurgeon to the multidisciplinary management team, and this was given in the context of both anatomical dissections and a live surgery demonstration by Dr. Sanjiv Bhatia from Miami and Prof Sarat Chandra of AIIMS, Delhi. The history of paediatric neurosurgery in India is rich with the names of ISPN past presidents such as Sanat Bhagwati and Chandrashekhar Deopujari.
In his presidential address during the 43rd ISPN meeting in Izmir in 2015, Prof Deopujari spoke about the history of the ancient civilisations, concentrating on India, its problems associated with the delivery of healthcare and the vision for the future, which is undoubtedly a bright one. The IndSPN has a membership of over 200 on a background of some 1800 neurosurgeons in India, catering for a population of 1.4 billion inhabitants. Life expectancy amongst the Indian population is 68.8 years, whilst 1.4% of the annual GDP per capita of $6,09.60 is spent on healthcare. Forty-one percent, or 518 million, of the population are under 18 years, and delivery of care is complicated by the facts that only 32% of the population is urban and that there are over 20 official languages within India. The Lancet Commission, commenting on the delivery of surgical care for abdominal emergencies in India, stated that the mortality risk was 16 times higher when living 100 km or more from a well-resourced hospital, which demonstrates the challenges of delivery of care that such a huge country presents. This must be even more so for neurosurgical care, where a workforce of 1800 is expected to deliver care to a population of 1.4 billion with over 20 official languages. From a political standpoint, India is the world’s largest democracy and the current Bharatiya Janata Party (BJP) government under Prime Minister Narendra Modi has pledged to increase the current health spending to 2.5% of GDP. Neurosurgical training in India has very much followed the lines of training in the UK, but there are few paediatric neurosurgical fellowships and few paediatric neurosurgical departments, again a reflection of the paucity of trained neurosurgeons generally. In the past, medically and neurosurgically trained Indian nationals played an important part in the delivery of care in the UK National Health Service, joining the junior surgical staff in the delivery of clinical care whilst at the same time expanding their surgical knowledge and expertise.
Indian graduates would then return to India to resume their clinical positions. This tradition was changed by the UK joining the European Community, as positions had to be offered preferentially to European graduates before graduates from the former Commonwealth. This led to considerable dissatisfaction both from the neurosurgical units in the UK and from the Indian graduates, who were effectively blocked from entering the UK to gain further training. Following the referendum in the UK and the decision to leave Europe, discussions have already begun about how these traditional links can be re-established in neurosurgery for the benefit of both our populations.

Take home message: The number of neurosurgeons in India remains inadequate for the size of the population, and fellowships in paediatric neurosurgery are needed to ensure adequate training in the speciality. The IndSPN is very active and encourages ISPN teaching courses through their own cycle of education. Excellent teaching programmes within neurosurgical units and centres of excellence in India offer a suitable resource for South–South collaboration within Asia.
I first visited Nepal in 1999 to attend the South Asian Neurosurgical Congress organised by Upendra Devkota. Since childhood, I had wished to visit the mountain kingdom, which has always had a special place in the heart of the British people. My great friend and housemate from university, Martin Entwistle, had joined the Royal Army Medical Corps after graduation and I had lunched in the Gurkha Mess in Aldershot on the day immediately before my wedding. The tales of bravery concerning these quiet people are legendary and acknowledged in the extraordinary number of honours awarded to Gurkha soldiers, including no fewer than 26 Victoria Crosses conferred on Gurkha officers and men. Since that first visit, I have been trekking in Nepal on a number of occasions and have grown to love the landscape and its people. On May 29th 2008, after many years of political upheaval, the Federal Republic of Nepal was declared and since then a new constitution has been agreed. Nepal was hit by a series of earthquakes and aftershocks in April 2015 which killed nearly 9000 people and injured approximately 22,000. The natural catastrophe was made worse by 3.5 million people being rendered homeless due to the collapse of buildings and the destruction of the fragile communications links within the country. Kathmandu was badly affected, with several World Heritage sites being severely damaged. For this visit, I was accompanied by my youngest daughter (aged 20 years) and the purpose of the trip was to visit some schools being rebuilt by the “In Your Hands” charity. We also travelled out to the worst-hit rural areas in Bamti Bhandar, a fourteen-hour drive from Kathmandu, where we visited a school and first aid post. Nepal has a well-developed infrastructure of health posts and primary health centres managed by the Village Development Committees (VDC).
However, the earthquake, aftershocks and subsequent flooding had rendered many roads impassable and, even though our visit was two years after the event, the evidence of the damage was everywhere; the further away from Kathmandu we travelled, the less evident was the impact of the $4 billion in aid money. The bureaucratic process for restitution appeared complex, and building supplies and expertise were both expensive and in short supply. The Federal Republic of Nepal has a population of 29.3 million, of whom 12.3 million, or 42%, are under 18 years. Life expectancy in Nepal is 70.7 years, whilst the GDP per capita is $729.53 and 2.3% of GDP is spent on healthcare. There are a total of 50 trained neurosurgeons and 8 residents in training, and it therefore seems clear that the numbers of trained neurosurgeons and of those in training are inadequate. Twenty percent of the population is urban and 7% of the population live in the Himalayas, whereas the concentration of specialist surgical help is necessarily located in the towns. This means that the provision of neurosurgical care is woefully inadequate and many patients have to travel considerable distances in hostile geographical circumstances and are then frequently lost to follow-up. At present, provision of paediatric neurosurgical care is confined to trauma, myelomeningocele and hydrocephalus, with few resources for the adequate treatment of other conditions. The new federal government of Nepal is committed to Resolution 68.15 and has a Joint Country Cooperation Strategy with the WHO. However, at present the words “Too Many, Too Far, Too Poor, Too Late, Too Few to Help, Too Little Done” apply to Nepal and it is only with considerable additional investment in healthcare infrastructure that the objectives of 68.15 can be achieved.
One solution may be investment in the natural resources of Nepal, particularly hydroelectric power, whilst to deal with communications and transport problems the use of digital technology, particularly telemedicine, may ensure a more equitable distribution of healthcare.

Take home message: Nepal is one of the poorest countries in the world, where access to surgical care of any sort is hampered by geography as well as by lack of infrastructure and personnel. The new Federal Government is actively looking for development partners and it is this that gives the greatest cause for optimism for the country.
When I was first appointed as a Consultant Neurosurgeon to the National Hospital, Queen Square and to Great Ormond Street Hospital for Sick Children, I was given financial support and two scholarships through the Royal College of Surgeons of England to visit North America and learn about epilepsy surgery. For six months in 1991, I was a visiting professor at the University of California in Los Angeles (UCLA) and also visited Miami, Dallas, Seattle and Montreal, although the longest time was spent in Los Angeles where we stayed with our young family. It was an invaluable experience both professionally and personally. The adult epilepsy service was run by Prof J “Pete” Engel whilst the paediatric service was run by Don Shields, and both of these academic giants were very much in favour of evolving the surgical side of epilepsy treatment. The paediatric neurosurgeon at the time was Warwick Peacock of dorsal rhizotomy fame. In Miami I was very warmly received by Michael Duchowny, Prasanna Jayakar and Trevor Resnick, and it began a long and very fruitful relationship between our academic departments. The final part of my six-month sabbatical was spent in Montreal with Andre Olivier and Fred Andermann, whose professional expertise was second to none and whose personal kindness and consideration were faultless. Being no stranger to Los Angeles, it was a great pleasure to receive and accept an invitation to speak in the international section of the AANS meeting hosted by Dr. Frederick Boop. It also allowed me the opportunity to visit the incredible Petersen Automotive Museum, where the special display was of an amazing collection of Bugatti cars. It was an extraordinary visit, further enhanced by a trip behind the scenes to the collection of cars not normally on open display. The United States has a population of 323.1 million of which 73.6 million, or 23%, are under 18 years.
Life expectancy is 79 years, GDP per capita is $57,466.79 and 17.1% of GDP is spent on healthcare, the equivalent of $10,000 per capita per year; but, as pointed out in the document “To Err is Human”, high spending on healthcare does not necessarily result in improved care and many are unable to access affordable surgical care. The birth rate of the population is declining, with a resultant increase in the elderly retired population no longer actively contributing to the taxation pool. This population of 323.1 million is served by 4200 neurosurgeons, of whom 220 are board certified in paediatrics, a similar number to the neurosurgeons who are members of the IndSPN serving a population of 1.4 billion. There is a very well integrated infrastructure, although universal healthcare is not established and appears to be a political bagatelle. The current administration has recently created barriers to free travel to the USA on the grounds of homeland security, and this meant that a number of ISPN members who had papers accepted could not attend and others chose not to attend. Rick Boop is a world-recognised paediatric neurosurgeon, active ISPN member and an authority on paediatric oncology, and as President of the AANS he made the theme of the meeting “Global Neurosurgery”. There was an excellent and inspiring opening session on the Global Neurosurgery theme and from the presentations made it was clear that there are a number of global surgical initiatives in neurosurgery based in the US and that they have been doing excellent work, in some cases for several decades and certainly prior to the report of the Lancet Commission. However, a great number of agencies are involved in this work with little or no ability to cross-reference or compare notes on successful or, more importantly, unsuccessful projects. In the international section, I was able to put forward the hypothesis that for global neurosurgery to reach the childhood population effectively the ISPN had a role in collaboration with other agencies.
After the session a number of us met to discuss what steps could be taken to advance global neurosurgical care with specific reference to children. During these discussions, it became apparent that little was truly known about the global neurosurgical workforce for children, the level of expertise and facilities available, and the plans in place to correct any deficiencies. In conjunction with Prof James Johnston from the University of Alabama at Birmingham and Dr. Michael Dewan from Vanderbilt University in Nashville, we decided to take it upon ourselves to develop two initiatives: first, to carry out a survey in an attempt to get an idea of the current status of paediatric neurosurgical care and its delivery; and second, to see if we could develop a platform for establishing collaborative partnerships.
International survey
This was created by Michael Dewan, collated using REDCap, and distributed to members of the ISPN, GICS, IndSPN, AASPN and other organisations during July and August of 2017. A full description of the outcome of this survey will be given by Michael Dewan in the following presentation and is to be submitted for publication, but in brief we received 512 responses from 78 countries. Of the 512 responses, 405 were from neurosurgeons, of whom 338 described themselves as having a major commitment to paediatric neurosurgery. There were responses from 107 non-neurosurgeons, and the questionnaire explored issues of training, manpower and resources. Although the survey was not exhaustive and clearly has limitations, it has given a vignette of the huge disparity of manpower, facilities and skills and does not paint a good picture for the future. From this survey, a number of key issues have been raised.
Matching website
In the survey that we carried out, we asked respondents if they would be interested in international collaboration and over 80% said that they would. This has led to the creation of InterSurgeon, of which I will speak later.
Take home message
The United States is resource- and personnel-rich, but despite this there are those who are disadvantaged, and universal healthcare is not available to all strata of society. There is an excellent record of humanitarian work being carried out by paediatric neurosurgeons and their adult colleagues, but there is a need for collaboration and the spread of experience and information.
In June 2017 I was invited by the ISPN President-Elect, Prof Graham Fieggen, to take part in the 2nd African Paediatric Neurosurgical Workshop. This is an excellent course run for the benefit of all African countries and is strongly supported by both the ISPN and the ESPN. Themes for the meeting included epilepsy and neuromonitoring, and the invited faculty brought with them unparalleled experience. It was a particular pleasure for me to take part in the course as my paternal grandfather was born and died in Pietermaritzburg, after my great-grandfather settled there in the latter part of the nineteenth century. I was, therefore, able to visit family graves and houses once occupied by my father’s family, as well as the regimental headquarters of the Natal Carabineers, a regiment in which my great-grandfather had been a bandsman. Pietermaritzburg has another peculiar place in history, as it was on the platform at Pietermaritzburg station in June 1893 that Mahatma Gandhi, a young lawyer, was ejected from a first-class carriage, an incident that started his political career. South Africa (RSA) has taken a prominent place in neurosurgical education on the African continent, particularly through the University of Cape Town; it has produced a previous ISPN President, Jonathan Peter, as well as hosted a remarkably successful 36th annual ISPN meeting in Cape Town in 2008. RSA has a population of 56.9 million, of whom 18.5 million or 34% are under 18 years, and of whom 64% live in poverty. Life expectancy is only 50.3 years, and of the GDP per capita of $5273.60, 8.8% is spent on healthcare. There are approximately 180 neurosurgeons, of whom only 4 are in full-time paediatric neurosurgical practice. Despite socioeconomic hardships and political turmoil, RSA is playing a principal role in education in Sub-Saharan Africa (SSA), where the issues are immense. The population of SSA already exceeds 1000 million, and of this 50% are under 18 years.
The birth rate is continuing to increase, and it is anticipated that by 2050 one in three of the world’s children will be in SSA. To serve this huge childhood population there are fewer than 15 neurosurgeons trained specifically in paediatrics. The World Federation of Neurosurgical Societies created a training centre in Rabat which has trained a number of surgeons from SSA, and this work has led to training centres and programmes developing in many other countries as well. The College of Surgeons of East, Central and Southern Africa (COSECSA) is taking a lead in running courses on basic surgical skills, including neurosurgery, and in neurotrauma.
Take home message
Africa, and particularly SSA, has a crisis in the delivery of surgical care which can only be resolved by developing innovative solutions for creating infrastructure and by workforce optimisation in neurosurgery.
In July 2017, Prof Roberto Jaimovich invited me to be the first visiting professor since he took over from Prof Graciela Zuccaro as head of Neurosurgery at Hospital de Pediatria Garrahan. Despite its recent economic troubles, Argentina has a long-standing commitment to paediatric neurosurgery, beginning with Raul Carrea, a founder member of the ISPN and President from 1978 to 1979. Argentina has a population of 44.4 million, of whom 13.3 million or 23% are under 18 years. Life expectancy is 77.8 years, and GDP per capita is $12,449.22, of which 8% is spent on healthcare. What is unusual about Argentina is that there are approximately 1000 trained neurosurgeons, and of these 70 are full-time paediatric neurosurgeons. I have visited Buenos Aires and Hospital Garrahan on a number of occasions and, through the epilepsy surgery team of Dr. Hugo Pomata, had the pleasure of receiving Marcello Bartolucci in London for a period of six months, since which time he has made great progress in epilepsy surgery in Buenos Aires. Garrahan has an extremely active training programme, and there are two features that make it unusual. The first is that the residency is in paediatric neurosurgery alone, with only a few months spent with the adult neurosurgical programme, which means that once trained, the residents are committed to a future in paediatric neurosurgery alone. Secondly, the programme accepts two trainees from Colombia on a regular basis and so is creating strong links within South America and improving the standard of neurosurgical care for children outside Argentina. As well as giving lectures and attending the operating sessions, I was able to spend time with the residents, who were very well motivated and had clear ideas about possible improvements to the training programme, which I was able to feed back to Prof Jaimovich.
The unit at Garrahan is young and enthusiastic and has a significant role to play in education in South America and in collaborations through the Federacion Latinoamericana de Sociedades de Neurocirugia (FLANC) and PediFLANC, its paediatric arm.
Take home message
Paediatric neurosurgery at Garrahan is taught in isolation from adult neurosurgery, and the teaching programme welcomes trainees from outside Argentina.
In August, an ISPN teaching course was organised in Santarem, a city in the Amazon region of Brazil. It was a wonderful opportunity to visit an area of Brazil to which we had not previously travelled and to fulfil an ambition to visit the opera house in Manaus. One of my favourite movies is Fitzcarraldo, the extraordinary film by Werner Herzog which tells of Brian Sweeney Fitzgerald (Klaus Kinski), who wishes to build an opera house in Iquitos, Peru. The film begins with Fitzgerald and Molly (Claudia Cardinale) attending the opera in Manaus to hear Enrico Caruso sing. The opera house was built in the Belle Epoque at the end of the nineteenth century, at the height of the rubber boom, and many of the materials to build it were brought from Europe. It was said that the intention was to attract Caruso to sing at the opening of the opera house in January 1897, but whether he did actually sing there now seems to be a matter for debate. However, we were able to attend a concert in this magnificent pink building and another dream was realised. From Manaus we travelled down the Amazon by boat to Santarem, where the course was being held, hosted by Dr. Erik Jennings Simoes. The venue was the splendid auditorium created by Erik’s brother near the town of Alter do Chao on the Tapajos river. The scenery was fantastic, as was the hospitality, and the quality of the course was outstanding. Dr. Simoes has a long history of working with the indigenous tribes in the interior of the Amazon region and has for many years been working with the Zoe tribe, whom he visits on a regular basis to provide medical care. We had the unique experience of meeting a Zoe tribesman during the conference, as one of the tribe’s foremost hunters fell from a tree and had to be brought to Santarem to be checked out. The tribe are identifiable by their long lip plugs and only came into contact with the outside world as recently as 1982.
Brazil has a population of 209.5 million, of whom 62.85 million or 30% are under the age of 18 years. Life expectancy is 73.8 years, and the GDP per capita is $8649.95, of which 8.3% is spent on healthcare. Brazil has a well-developed infrastructure with 2875 neurosurgeons, of whom 60 are primarily paediatric neurosurgeons. Over the last two decades there has been significant growth in the Brazilian economy, which has brought prosperity, but in recent years this growth has slowed and the country has become embroiled in a number of political and financial scandals centred on bribery and corruption. Although Brazil has enjoyed economic success and has a well-developed infrastructure, there is a concentration of population and resources in the major conurbations, which means that there are many areas which are geographically remote and where neurosurgical services for children are poor. This was demonstrated by our hosts in Santarem, who cover a huge geographical area with many of the remote areas inaccessible except by boat, making treatment delays inevitable.
Take home message
Despite its recent economic success, many areas of Brazil are poorly served with paediatric neurosurgery. There needs to be some incentive to work in more isolated and remote areas and to develop infrastructure to support doctors in these regions. This geographical inaccessibility has parallels with Nepal, although economically they are very dissimilar. Nonetheless, like Nepal, one solution may come through training programmes and the use of digital technology, both for the delivery of clinical care and for education.
The end of my presidential year is marked by the 45th Annual ISPN meeting in Denver, at which this speech is delivered. Denver is in fact another city with which I have family ties, although this time on my maternal side. My maternal great-great-grandfather was a mining expert from Cornwall in England who was invited to go out to Denver to improve the yield from the extraction of gold from ore, as after the initial seams of gold had been worked, the deeper veins of gold were mixed with copper and iron pyrites, making extraction much more costly. After an initial visit to assess the situation, Richard Pearce was offered the job of setting up new furnaces, first at Empire and then at Black Hawk. He then lived with his family for many years in the Denver area, becoming British Vice Consul and receiving an honorary PhD from Columbia College, New York. My grandfather was born in Denver but, as the mining business declined in Denver, his family returned to Cornwall when he was 7, and so apart from a brief period at the Massachusetts Institute of Technology following World War I in 1919–1920, he spent the rest of his life in England. During 2017 and following the AANS meeting, Jim Johnston and I worked on the idea for a matching site for international collaboration. Whilst listening to the presentations at the AANS meeting in Los Angeles earlier in the year, I was struck by the fact that although there was excellent work being done in many parts of the world, it was uncoordinated. A study carried out by the International Education Subcommittee of the AANS/CNS joint paediatric section and published in the Journal of Neurosurgery: Pediatrics showed that 29 separate agencies were involved in different forms of collaboration. Of 116 respondents, 61% had carried out or taught neurosurgery in a developing country, with 49% travelling annually to do so.
Seventy-seven percent expressed the wish to work elsewhere, but 43% stated that inability to identify a collaborative partner prevented them from doing so. They concluded: “Creation and curation of an online database of ongoing projects to facilitate coordination and involvement may be beneficial”. However, a simple mapping process has considerable difficulties associated with it, and Dr. Marilyn Butler of the Global Pediatric Surgery Network was able to share with us the difficulties she has encountered over the last ten years in establishing a mapping project for general paediatric surgery. We therefore decided to develop an independent web-based platform to match centres offering services to those in need of assistance. We went through a branding process for the project and then developed the website along the lines of a dating site. With support from the ISPN, the University of Alabama at Birmingham and private donations, the InterSurgeon project has been born and is continuing to be developed, in the hope and expectation that more partnerships can be created within paediatric neurosurgery and thereby children’s lives improved. InterSurgeon will be free to all, and our intention is to register as a charity in the UK.
Models of partnership & fellowship
In the session following this presidential address, we will hear from several people who have extensive experience of long-term international collaborative partnerships, and each has evolved a different model appropriate for their circumstances. Prof Michel Zerah has worked for many years in Vietnam, teaching paediatric neurosurgery to adult neurosurgeons. Prof Ezio di Rocco has been teaching third ventriculostomy to general surgeons, whilst Jim Johnston has applied modern technology in the form of a tablet to provide remote surgical supervision, advice and teaching in Vietnam. Dr.
Maya Bhattachan from Kathmandu works alone in her institution and wishes to have a brief period of intensive exposure to surgical procedures in a South–South partnership in order to observe specific techniques not easily understood from the textbooks. She will describe how her neurosurgical practice consists primarily of traumatic brain injury as a result of poor levels of child safety, how the majority of tumours present very late, and how there is very poor follow-up. Patrick Kamalo from Malawi and Gyang Bot from Nigeria both have experience of international fellowships but explain the difficulties of applying what they have learnt to their local, resource-poor situation. I hope that these presentations and the discussions that follow will stimulate us all to consider what part we can play in Global Neurosurgery for Children. The key issues are to expand the neurosurgical workforce whilst at the same time optimising the workforce already in place; to collaborate with others to develop better infrastructures; and to create training schemes that deliver care as close to the point of need as possible.
One’s destination is never a place, but a new way of seeing things—Henry Miller
I have not included in this address every country or place that I visited during my presidential year, despite the fact that the process of travel and every destination visited has given me an idea or firmed up my resolve on ideas previously formulated. The ISPN already contributes to education in our speciality by organising teaching courses in many parts of the world, and these have now extended to include nurses. The ISPN Guide is an electronic text accessible without charge through the ISPN website. We recognise that the ISPN annual meeting is a platform for the exchange of information and for meeting others who may be working in similar environments. For this reason, we give financial support to attend the annual scientific meetings through scholarships and reduced fees for LMIC attendees and offer candidate membership of the society with access to Child’s Nervous System free of charge. We offer finance for visiting fellowships and observerships, both for individuals and for multidisciplinary teams. Finally, there are reduced membership fees offered for applicants from LMICs. The ISPN has also provided funding for the InterSurgeon project, which offers the potential to create collaborative partnerships.
There is no doubt that the ISPN does a lot to further Global Neurosurgery for children, but we can and should do more. I believe that we need to engage further with other organisations involved in Global Surgery and in particular continue our association with GICS. This will offer us the opportunity of bringing paediatric neurosurgery education to paediatric surgeons, who represent the greatest opportunity for improving the delivery of neurosurgical care in many parts of the world for conditions such as TBI, hydrocephalus and myelomeningocele. We must overcome the professional isolation that many neurosurgeons seem to think is an obligatory part of our speciality and recognise that if we increase the amount of task sharing or workforce optimisation that we engage in, it is likely that we will introduce our speciality to a wider audience and in so doing strengthen rather than weaken our speciality. This clearly needs to be in the context of training more neurosurgeons appropriately for their local situation, but unless we take advantage of the skilled workforce present in the form of general paediatric surgeons, many lives will be lost. In some countries, it will take several generations before the numbers of neurosurgeons are sufficient to provide universal health coverage. GICS also gives us the opportunity to get paediatric neurosurgery recognised as part of the National Surgical Plans of LICs, bringing to the attention of politicians what our speciality can offer and the potential savings of life and money that can ensue from neurosurgery for children being part of a well-planned national surgical strategy. We must be involved in the implementation of WHA resolution 68.15. We have the opportunity through the WFNS to improve instrumentation in many parts of the world, and we should advise on equipment suitable for paediatric use.
We need to liaise more closely with the WFNS over the paediatric content of their educational courses and indeed integrate them into the work of the ISPN Educational Committee. The WFNS is also recognised globally as our international representative professional body, and we should seek a means to be more fairly represented by it, as currently the ISPN is not itself a member of the WFNS. Through the WFNS, we should lobby other organisations such as the WHO on matters relating to the neurosurgical wellbeing of children, such as dietary supplementation with folate, vitamin K injection at birth, and head protection and restraints during sport and road travel. Finally, InterSurgeon offers a platform for international collaboration and, as the work on this project progresses, every ISPN member who is interested in the matter of Global Neurosurgery for Children should enrol and create offers or requests so that international partnerships can be created and the plight of children with neurosurgically treatable conditions improved.
In conclusion, I believe that the ISPN has a unique role in both education and political lobbying and that, by working in concert with others, we have the opportunity of having a significant and long-lasting impact on the prevention and surgical management of lesions of the central nervous system in children and, in so doing, improving the length and quality of their lives.
William Harkness MB ChB FRCS
ISPN President
Cornwall, England
With grateful thanks to and warm appreciation of
In memoriam
In my text and in the list of thanks above, I have acknowledged my friend and colleague Dr. Sanjiv Bhatia. In May of 2018, Sanjiv died under tragic circumstances, and the whole of the paediatric neurosurgical community is still reeling from this. Sanjiv was a masterful surgeon, a great thinker, an excellent educator but most of all a wonderfully warm and caring individual who loved his patients and doted on his family. He had worked in Haiti with the relief programme of his colleague John Ragheb, was one of the great supporters of Global Neurosurgery for Children, and joined us for the discussions at the AANS meeting in Los Angeles where our survey and InterSurgeon were born. We have lost a great friend and wonderful colleague.
Addendum
Since the ISPN meeting in Denver, the InterSurgeon website has been through testing and was launched on March 15th 2018. In the first three months, we have had over 160 members from 52 countries join the website. We have been asked by the WFNS to expand the membership of InterSurgeon to adult neurosurgery and are actively seeking funds to do this. We also hope to expand into paediatric surgery and then urology later in 2018. Although now retired from clinical practice, I am honoured to be appointed as the ISPN Ambassador for Global Neurosurgery and have been nominated as a member of the WFNS-WHO liaison committee, and recently attended the 71st World Health Assembly meeting in Geneva.
I attended the satellite symposium on Global Surgical, Obstetric and Anaesthetic Care, a technical meeting hosted by the WHO Essential and Emergency Surgical Care Programme. In addition, I have continued to travel, visiting Peru in December 2017 to carry out a review of the paediatric neurosurgical service at the Instituto Nacional de Salud del Nino, Brena, Lima. I took part in the first ISPN teaching course to be held in Bangladesh in January 2018 and in the third GICS meeting, which was held in Vellore, India, in January 2018, during which I gave a very brief neurosurgical skills presentation to general paediatric surgeons, with a hands-on demonstration using a Hudson brace and Gigli saw.
The material for this presidential address was derived from a large number of medical and non-medical sources, including WHO and World Bank website data for population and healthcare statistics. Essential reading on the subject of Global Surgery includes:
WHO Programme on Neurological Diseases and Neuroscience, World Health Organization (2004). Atlas: of a collaborative study of the World Health Organization and the World Federation of Neurology. Geneva: World Health Organization. http://www.who.int/iris/handle/10665/43075
Meara JG, et al. Global Surgery 2030: evidence and solutions for achieving health, welfare, and economic development. Lancet, Volume 386, Issue 9993, 569–624.
Debas H, Mock C, Gawande A, Jamison DT, Kruk M, Donkor P (eds), with a foreword by Paul Farmer. Disease Control Priorities, 3rd edition. Volume 1: Essential Surgery.
Mock C, Lormand JD, Goosen J, Joshipura M, Peden M. Guidelines for essential trauma care. Geneva: World Health Organization, 2004.
Mock CN, Donkor P, Gawande A, Jamison DT, Kruk ME, Debas HT, for the DCP3 Essential Surgery Author Group. Essential surgery: key messages from Disease Control Priorities, 3rd edition.
Deopujari C. Presidential address 2015. Childs Nerv Syst (2016) 32:1761–1767.
Dare AJ, Grimes CE, Gillies R, Greenberg SLM, Hagander L, Meara JG, Leather AJM. Global surgery: defining an emerging global health field. Lancet 2014; 384: 2245–47.
Dempsey R, et al. Effect of geopolitical forces on neurosurgical training in sub-Saharan Africa. World Neurosurg 101:196–202, 2017.
Park KB, Johnson WD, Dempsey RJ. Global Neurosurgery: the unmet need. World Neurosurg. 2016;88:32–35.
Koplan JP, Bond TC, Merson MH, Reddy KS, Rodriguez MH, Sewankambo NK, Wasserheit JN, for the Consortium of Universities for Global Health Executive Board. Towards a common definition of global health. Lancet 2009; 373: 1993–95.
Systematic review and meta-analysis determining the effect of implemented COVID-19 guidelines on surgical oncology volumes and clinical outcomes
Introduction
During the Coronavirus disease 2019 (COVID-19) pandemic, the non-COVID-19 healthcare system was adjusted through newly developed measures, including the identification of surgical prioritization in the oncological field to deliver adequate Intensive Care Unit (ICU) capacity and available healthcare providers. Due to the sudden emergence of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) and its rapid spread, the above-mentioned measures were developed with limited knowledge of SARS-CoV-2's viral behavior. In addition, in the Netherlands, several guidelines were developed based on expert advice and limited knowledge of COVID-19, including in the field of surgical oncology. The Dutch oncology-oriented guideline consisted of surgical prioritization recommendations. Identifying levels of surgical priority is necessary to determine if procedures can be postponed, balancing the risk between viral exposure and disease progression. The consequences of these implemented measures were noticeable in surgical and non-surgical oncological practice. Currently, various vaccines are available to reduce the risk of mortality or severe illness caused by COVID-19. However, as long as COVID-19 continues to spread, there is a risk that new variants will emerge. In addition to the mutating nature of viruses, several factors contribute to an increased risk of developing new variants, including people's reluctance to receive COVID-19 vaccinations and limited or no access to vaccinations. The aftermath of the COVID-19 pandemic may be extensive, and future pandemics are plausible, resulting in additional pressure on healthcare; a subsequent scale reduction in surgical care may be insurmountable.
Therefore, it is essential to determine whether surgical oncology decisions made during the COVID-19 pandemic have led to disease progression and associated additional care. Evaluating these clinical surgical data makes it possible to revise surgical oncology measures if necessary. This systematic review and meta-analysis therefore aims to provide insight into the number and clinical outcomes of the surgical oncology procedures performed during the COVID-19 pandemic.
Materials and methods

2.1 Search strategy

This systematic review and meta-analysis was performed according to the guidelines of the PRISMA checklist for meta-analyses . A systematic literature search was performed in the PubMed and Embase databases, including all articles published before March 21, 2022. The search strategy contained a combination of keywords (and their synonyms), including "COVID-19", "SARS-CoV-2", and "surgical". The complete search strategy is available in the supplementary data ( ).

2.2 Study selection

After removing duplicates, four reviewers (EB, OB, EH, and MF) independently screened articles by title and abstract for eligibility. The four reviewers discussed discordant judgments until consensus was reached. All articles meeting the following inclusion criterion were selected for full-article review: surgical procedures involving oncological surgery with data on oncological outcomes and/or the number of performed surgical procedures. Studies were excluded from the systematic review for the following reasons: articles consisting only of opinion-based recommendations and guidelines; articles without a comparison to a pre-COVID-19 cohort; non-human biological sample usage; non-English-language articles; case reports, case series, editorials, commentaries, short communications, letters, review articles, and conference abstracts; and no full text available. Two reviewers (EB, MF) reviewed the retrieved full-text articles. Agreement on eligibility was obtained for all articles.

2.3 Data extraction and definitions

The following data were extracted from each eligible study: first author's surname, publication year, type of malignancy, study period of the (pre-)pandemic cohort, number of performed surgical procedures, and, where available, waiting time in days between operation indication and surgical procedure.
The influence of the COVID-19 pandemic on performed surgical oncology procedures was evaluated by comparing the total number of surgical procedures performed pre-pandemic to the total number performed during the pandemic. To allow a comparison between the pre-COVID-19 and COVID-19 groups that is as reliable as possible, most studies cover the same pre-COVID-19 and COVID-19 study period or periods consisting of the same number of days. The timeframe of the (pre-)pandemic cohort was determined by the authors of the included study. To compare the studies as reliably as possible, studies were only included if the COVID-19 cohort underwent a surgical procedure during the first wave of the pandemic. Of the included studies, data on the most commonly shared clinical outcomes were determined. These clinical outcomes included the pathological T- and N-stages of the TNM classification and the complication rate . Pathological T-stage cut-offs of ≥T2 and ≥T3 were used to provide insight into short-term disease progression. In addition, for the pathological N-stage, ≥N1 was used as the cut-off for evaluating the difference in clinical outcomes. Moreover, the Clavien-Dindo classification was used to classify the severity of reported major postoperative complications . For this meta-analysis, a Clavien-Dindo classification of ≥3 was used as the cut-off for major postoperative complications.

2.4 Bias assessment

The risk of bias of each eligible study was evaluated by two reviewers (EB, MF) using the ROBINS-I tool . The tool consists of seven domains: confounding, selection of participants, classification of interventions, deviations from intended interventions, missing data, measurement of outcomes, and selection of the reported result. Each domain was rated on three levels of bias: low, intermediate/unclear, or high risk of bias. The two authors discussed discordant judgments until consensus was reached. The summary of the risk of bias is shown in the supplementary data ( ).
The full risk of bias assessment is displayed in the supplementary data as well ( ).

2.5 Statistical analysis

Descriptive statistics were used to describe patient characteristics. A meta-analysis was performed to visualize the number of surgical oncology procedures performed before and during the COVID-19 pandemic using the ggplot2 package in R. The effect of heterogeneity was quantified using I², where a p-value < 0.05 indicated significant heterogeneity across the studies. In addition, a random-effects model was used to assess pooled oncological outcomes. The odds ratio (OR) was estimated with its variance and 95% confidence interval (CI). Statistical significance was defined as a p-value < 0.05. Statistical analyses were carried out using the meta package in the R statistical software (version 4.0.2).
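The random-effects pooling described above can be sketched in a few lines. The following Python snippet uses the DerSimonian-Laird between-study variance estimator, one common choice for this kind of analysis; the 2x2 counts are invented placeholders, not data from the included studies.

```python
# Illustrative random-effects pooled odds ratio (DerSimonian-Laird tau^2),
# the kind of computation performed here with the R "meta" package.
import math

# (events_pandemic, total_pandemic, events_prepandemic, total_prepandemic)
# Invented placeholder counts, not data from the included studies.
studies = [
    (40, 200, 40, 210),
    (55, 300, 60, 310),
    (25, 150, 22, 140),
]

log_ors, variances = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c                       # non-events in each arm
    log_ors.append(math.log((a * d) / (b * c)))  # study log odds ratio
    variances.append(1 / a + 1 / b + 1 / c + 1 / d)

# Fixed-effect (inverse-variance) quantities needed for tau^2 and I^2
w = [1 / v for v in variances]
fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)
q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ors))
df = len(studies) - 1
c_factor = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c_factor)            # between-study variance
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Random-effects weights incorporate tau^2
w_re = [1 / (v + tau2) for v in variances]
pooled = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
or_pooled = math.exp(pooled)
ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
print(f"OR {or_pooled:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), I^2 = {i2:.0f}%")
```

With these toy counts the three study ORs are close to 1, so Q falls below its degrees of freedom, tau^2 is truncated to zero, and the random-effects result coincides with the fixed-effect one, mirroring the null pooled ORs reported in the Results.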
Results

A total of 12,782 articles were identified after duplicate removal. Of these, 12,406 were excluded during title and abstract screening, and 376 articles were screened in full text ( ). Overall, 24 studies were included, and 6762 surgical oncology procedures were reviewed. summarizes the main characteristics of the included studies. Study publication dates ranged from 2020 to 2022, with most studies published in 2020 and 2021. The eligible studies provided data on various oncological disciplines, including central nervous system (CNS), thyroid, thoracic, breast, colorectal, hepatocellular, endocrine, genitourinary, and prostate cancer, and skin and soft tissue sarcomas [ , , , , , , , , , , , , , , , , , , , , , , , ]. Of these included studies, eight evaluated surgical procedures for breast cancer [ , , , , , , , ]. In addition, six studies described the waiting time between pathological examination or diagnosis of cancer and the date the surgical procedure was performed [ , , , , , ]. Of these studies, three described shorter waiting times compared with pre-pandemic practice, of 0.5, 3 and 14 days, respectively [ , , ]. The remaining three studies showed minimally prolonged waiting times compared with pre-pandemic practice, of 4.0, 2.7 and 0.4 days, respectively [ , , ]. In addition, all of these studies reported information on performed breast cancer procedures [ , , ]. All studies were classified as being of sufficient overall methodological quality according to the ROBINS-I tool. A more comprehensive risk assessment of all included studies is presented in .
Surgical oncology volumes

The total number of surgical oncology procedures performed during the COVID-19 pandemic was 2867, compared to 3895 in pre-pandemic practice (total decrease 26.4%) ( ). Moreover, 614 oncological breast procedures were performed during the pandemic, compared to 612 before the pandemic (total increase 0.3%) ( and ).
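As a quick arithmetic check, the reported percentage changes follow directly from the procedure counts given above (change relative to the pre-pandemic baseline):

```python
# Worked check of the volume changes reported in the text.
def pct_change(during: int, before: int) -> float:
    """Percentage change relative to the pre-pandemic baseline."""
    return (during - before) / before * 100

overall = pct_change(2867, 3895)   # all surgical oncology procedures
breast = pct_change(614, 612)      # breast cancer procedures only
print(f"overall: {overall:.1f}%, breast: {breast:+.1f}%")
# → overall: -26.4%, breast: +0.3%
```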
Clinical oncological outcomes

Five studies with a total of 2608 patients included data on pathological ≥T2 staged tumors [ , , , , ]. No difference was identified in the proportion of ≥T2 tumors in the pandemic group compared to the pre-pandemic group (OR 1.00; 95% CI 0.72–1.38, P = 0.989) ( A, ). Four studies describing 1986 patients included pathological ≥T3 data [ , , , ]. No difference was observed in the number of ≥T3 tumors during the pandemic compared to pre-pandemic practice (OR 0.95; 95% CI 0.69–1.32, P = 0.778) ( B, ). Furthermore, four studies with a total of 1951 patients included data on a pathological ≥N1 stage [ , , , ]. No difference in ≥N1 during the COVID-19 pandemic compared to the pre-pandemic group was observed (OR 1.01; 95% CI 0.68–1.50, P = 0.964) ( C, ). In addition, five studies describing 1901 patients included the number of major postoperative complications (Clavien-Dindo ≥3) during the pandemic compared to the pre-pandemic cohort [ , , , , ]. No significant difference in the number of major postoperative complications was identified (OR 1.55; 95% CI 0.87–2.74, P = 0.134) ( D, ).
Discussion

The current meta-analysis analyzed the number of surgical procedures performed for oncological pathologies during the COVID-19 pandemic. In total, the number of surgical procedures performed for an oncological pathology decreased (2867 vs. 3895, −26.4%) during the pandemic compared to pre-pandemic practice. In addition, the number of surgical procedures performed for breast cancer remained stable during the pandemic (614 vs. 612, +0.3%). Furthermore, no difference was identified in the proportion of ≥T2, ≥T3, or ≥N1 tumors during the pandemic compared to pre-pandemic practice, with ORs of 1.00, 0.95, and 1.01, respectively. Finally, the number of major postoperative complications (Clavien-Dindo ≥3) was slightly, although not significantly, higher during the pandemic (OR 1.55, P = 0.134) than in pre-pandemic practice. During the COVID-19 pandemic, several guidelines were established to triage (surgical oncology) procedures and to determine within which time frame surgical procedures should occur. Different triage methods were used for the clinical implementation of non-COVID care, including stratification into acute, semi-acute, and elective procedures; into emergency, urgent, elective with the expectation of cure, and elective with no predicted harmful outcome; or into low, intermediate, and high acuity [ , , , ]. In addition, some guidelines specifically described deferrable or prioritized surgical oncology procedures . The common denominator in these guidelines was to provide maximal care capacity for COVID-19 patients with as little disease progression as possible in non-COVID-19 pathologies. It is essential to investigate whether these guidelines were implemented in daily surgical practice and whether short-term clinical outcomes were reported. This makes it possible to determine whether disease progression may occur during possible future changes in operating room capacity, for example, if new pandemics arise.
This systematic review and meta-analysis showed that the number of performed surgical oncology procedures declined (2867 vs. 3895, 26.4% total decrease) during the pandemic compared to pre-pandemic clinical practice. This is in line with the Dutch Integral Cancer Registration (IKNL), which showed a decrease in the number of performed surgical oncology procedures during the first pandemic wave in the Netherlands . In contrast to the overall number of performed surgical oncology procedures and the IKNL data, this meta-analysis showed a stable number of performed surgical breast cancer procedures during the pandemic compared to pre-pandemic volumes (614 vs. 612, 0.3% total increase). Therefore, the decreased number of performed surgical oncology procedures in this study may not be attributed to breast cancer practice. It is possible that, in order to reduce the pressure on healthcare, the operating time freed up by postponed elective surgical procedures was more easily filled by breast cancer procedures, after which patients are discharged faster postoperatively, than by complex oncological procedures requiring intensive care unit admission. Moreover, postponement of surgical oncology procedures may or may not lead to disease progression; this depends on multiple factors [ , , ]. IKNL has estimated that, owing to stable chemotherapy delivery and the catch-up in cancer diagnoses and surgical procedures, enough (non-)surgical patients have received cancer treatment in the Netherlands . This systematic review and meta-analysis included six studies reporting the waiting time between histological or cytological examination or diagnosis of cancer and the date of the performed surgical procedure, or the time between surgical consult and surgical procedure. Of these studies, three showed a minimally longer waiting time during the pandemic than before the pandemic (mean difference 2.4 days, range 0.4–4.0).
The tumors are not expected to have grown to a clinically relevant extent in this short time . Additional data are necessary to inventory each hospital's waiting times, since previous literature states that increased waiting time for oncological procedures may lead to a lower overall survival rate . Moreover, this meta-analysis showed no significantly increased number of patients presenting with pathological ≥T2, ≥T3, or ≥N1 tumors or major postoperative complications during the COVID-19 pandemic compared to pre-pandemic cohorts. These results may indicate that no disease progression occurred during the COVID-19 pandemic in the included oncological studies, a conclusion also reached in a recent Dutch COVID-19 study focusing on the stage distribution of colorectal cancers . This may be explained by some solid cancers being years old when noticed and requiring a surgical procedure . However, caution is advised, as calculations anticipate that diagnostic delays due to the COVID-19 pandemic may increase the number of preventable cancer deaths . This systematic review and meta-analysis has some limitations. First, separating surgical oncology volumes by oncology discipline was only possible for breast cancer. In addition, the majority of the breast cancer studies included data from Italy. Therefore, extrapolating the number of performed surgical breast cancer procedures to other countries may be difficult. Further research is necessary to determine the net number of performed surgical procedures for each country to allow a more realistic representation of the delayed healthcare. Second, the current meta-analysis is limited by the heterogeneity of the data. The severity of the COVID-19 pandemic differed between countries and regions, leading to heterogeneous approaches to oncological guidelines. As a result, inevitable variation is observed in the chosen pre-pandemic and pandemic phases, chronology, and management between the included studies.
Specifically, some studies set the start date of their COVID-19 cohort before the official WHO declaration of the COVID-19 pandemic, which may be explained by the varying incidence of COVID-19 between countries and/or regions . Third, this study was unable to review whether the observed reduction in surgical volumes was related to the deferral of surgical procedures due to an altered hospital approach or to patient-driven avoidance of care. Finally, more research is essential to determine whether people have been treated on time, to provide well-founded information for possible future pandemics. In conclusion, this meta-analysis showed a decrease (−26.4%) in the number of performed surgical oncology procedures during the COVID-19 pandemic (3895 vs. 2867). In addition, the number of performed surgical breast cancer procedures remained stable (+0.3%). Moreover, reported short-term oncological staging and major postoperative complications showed no significantly increased disease progression compared to pre-pandemic practice. In the event of future pandemics, the surgical oncology care performed during the first wave of the COVID-19 pandemic appears appropriate with regard to short-term outcomes. Further research should determine long-term and country-specific clinical outcomes.
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Not applicable.
Research data is available upon reasonable request.
E. de Bock: conception and design, data collection, analysis and interpretation, writing the article, critical revision of the article, E.S. Herman: conception and design, data collection, analysis and interpretation, writing the article, critical revision of the article, O.W. Bastian: conception and design, data collection, analysis and interpretation, writing the article, critical revision of the article, M.D. Filipe: conception and design, data collection, analysis and interpretation, writing the article, critical revision of the article, M.R. Vriens: conception and design, analysis and interpretation, writing the article, critical revision of the article, M.C. Richir: conception and design, analysis and interpretation, writing the article, critical revision of the article.
None.
Predictive value of the

As life expectancy continues to increase, emergency departments are encountering a growing population of geriatric patients. This demographic shift necessitates adjustments in emergency services and, in a broader sense, in hospital and health policies. Predicting mortality among geriatric patients presenting to the emergency department is a complex issue. These patients often face an elevated risk of mortality due to multiple chronic illnesses, polypharmacy and reduced physical activity. To predict mortality, scoring systems, such as the Acute Physiology and Chronic Health Evaluation III (APACHE III), the Hospice in End-Stage Liver Disease Prognostic score (HELP scale), the Burden of Illness Score for Elderly Persons (BISEP score), the Frail Elderly Subject: Evaluation and Follow up (Sujet Âgé Fragile: Évaluation et Suivi, SAFES) and the Hospital-Patient One-Year Mortality Risk (HOMR score), have been developed. Nevertheless, the applicability of these scores to unselected geriatric patients requiring acute critical care is complicated by variations in performance across population groups and by their reliance on the patient's prior medical history, which is often difficult to access and of uncertain reliability in emergency care settings. Therefore, there is a need for a score that incorporates more personalized factors, such as nutritional status, as well as acute conditions, such as anemia, to predict mortality among geriatric patients presenting to the emergency department. The Hemoglobin, Albumin, Lymphocyte and Platelet (HALP) score has garnered attention recently as a scoring system that provides information about patients' nutritional status, anemia status, and inflammatory processes. Introduced by Chen et al.
in 2015, the HALP score is calculated as hemoglobin × albumin × lymphocytes / platelets, and has been frequently utilized in determining prognosis, particularly among patients with malignancies. , , , In addition, the HALP score has been shown to be valuable for predicting hospital mortality among patients with ST‐segment elevation myocardial infarction, and projecting the occurrence of recurrent strokes and mortality in those with acute ischemic stroke. , Considering that the HALP score consists of albumin levels reflecting nutritional and inflammatory status, hemoglobin levels showing anemia status, and lymphocyte and platelet levels showing inflammatory status, it seems to be appropriate for use in the emergency department for geriatric patients. The present study aimed to investigate the predictive value of the HALP score for mortality among geriatric patients presenting to the emergency department.
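The HALP calculation itself is a single expression. A minimal sketch follows; the units reflect the convention of expressing hemoglobin and albumin in g/L with lymphocyte and platelet counts in ×10³/μL, and the sample values are illustrative, not patient data.

```python
# Minimal sketch of the HALP score calculation described above.
def halp_score(hemoglobin_g_per_l: float,
               albumin_g_per_l: float,
               lymphocytes: float,
               platelets: float) -> float:
    """HALP = hemoglobin x albumin x lymphocytes / platelets."""
    return hemoglobin_g_per_l * albumin_g_per_l * lymphocytes / platelets

# Illustrative values: Hb 14 g/dL (140 g/L), albumin 4.0 g/dL (40 g/L),
# lymphocytes 2.0 x10^3/uL, platelets 250 x10^3/uL
print(halp_score(140, 40, 2.0, 250))  # → 44.8
```

Note how the result for these plausible adult values lands near the average adult score of 49 reported by Antar et al. in the Discussion.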
Study design and data collection

The present retrospective study was carried out in the emergency department of a tertiary hospital. This study was approved by the Ataturk University Clinical Research Ethics Committee with decision number 4/34 and dated 6 July 2024. Patients aged ≥65 years who presented to the emergency department between 1 January 2018 and 1 January 2024 were included in the study. Patients with known malignancies (including hematological malignancies), those who had received chemotherapy or radiotherapy within the past year due to malignancy, those with diagnoses or suspected diagnoses of COVID-19, those with Crimean–Congo hemorrhagic fever or immune thrombocytopenic purpura, those presenting due to trauma regardless of cause (e.g. falls, motor vehicle accidents, neglect, abuse and crush injuries) and those with incomplete data for any reason were excluded from the study. Data on patient age, sex, date of presentation to the emergency department, provisional diagnosis at presentation, emergency department outcome (hospitalization, discharge or death in the emergency department), diagnosis at the time of admission for hospitalized patients and clinical outcomes (discharge or mortality) were obtained from electronic medical records. Hemogram parameters, including hemoglobin, albumin, lymphocytes and platelets, measured at the time of emergency department presentation were also retrieved from these records. A total of 125 965 patients presenting to the emergency department of Ataturk University Research Hospital Emergency Medicine, Erzurum/Turkey were initially screened. Among these, 25 456 patients presented due to trauma-related causes (e.g. falls, motor vehicle accidents, neglect, abuse or crush injuries) and were therefore excluded from the study.
Further excluded from the study were 896 patients diagnosed with COVID-19, who were originally admitted for reasons other than COVID-19 during the pandemic but received a COVID-19 diagnosis after hospitalization; 2300 patients diagnosed with Crimean–Congo hemorrhagic fever; and 21 184 patients diagnosed with malignancy. Finally, 10 971 patients were excluded due to incomplete data in their electronic medical records. As a result, 62 262 patients who met the inclusion criteria were included in the sample (Fig. ). The data derived from electronic records included hemoglobin (g/dL), albumin (g/dL), lymphocyte count (×10³/μL) and platelet count (×10³/μL). These parameters were converted to appropriate units (g/L or ×10³/μL) for the calculation of the HALP score using the following formula: hemoglobin × albumin × lymphocyte count / platelet count.

Statistical analysis

Statistical analyses were carried out using SPSS version 25 (IBM, Armonk, NY, USA). The Kolmogorov–Smirnov test was carried out to assess the normality of the data distribution. Descriptive statistics are presented as frequencies (n) and percentages (%) for categorical variables, and as median and interquartile range (25%–75%) values for variables without a normal distribution. Comparisons between groups were performed using the χ²-test or Fisher's exact test, where appropriate. Group comparisons for non-normally distributed variables were analyzed using the Mann–Whitney U-test. Spearman correlation analysis was used to investigate relationships between variables that did not show a normal distribution. Receiver operating characteristic (ROC) analysis was carried out to evaluate the predictive power of the HALP score for in-hospital mortality and discharge. The area under the ROC curve was calculated for albumin, lymphocyte count and the HALP score for the prediction of patient outcome. Youden's J index was used to determine optimal cut-off values.
Sensitivity and specificity were computed with 95% confidence intervals (CIs). Statistical significance was set at P < 0.05.
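Choosing a cut-off with Youden's J index (J = sensitivity + specificity − 1) can be illustrated with a small sketch. The scores and outcomes below are invented toy values, not study data; because lower HALP scores predicted mortality here, the classification rule is "score ≤ cut-off".

```python
# Hedged sketch of cut-off selection with Youden's J index on toy data.
def youden_cutoff(scores, died):
    """Return (J, cutoff, sensitivity, specificity) maximizing J."""
    best = (-1.0, None, None, None)
    for cut in sorted(set(scores)):
        tp = sum(1 for s, d in zip(scores, died) if s <= cut and d)
        fn = sum(1 for s, d in zip(scores, died) if s > cut and d)
        tn = sum(1 for s, d in zip(scores, died) if s > cut and not d)
        fp = sum(1 for s, d in zip(scores, died) if s <= cut and not d)
        sens = tp / (tp + fn)    # deaths correctly flagged as low-score
        spec = tn / (tn + fp)    # survivors correctly left unflagged
        j = sens + spec - 1
        if j > best[0]:
            best = (j, cut, sens, spec)
    return best

scores = [12, 18, 22, 25, 31, 38, 44, 52, 60, 71]   # toy HALP scores
died   = [1,  1,  1,  0,  1,  0,  0,  0,  0,  0]    # 1 = in-hospital death
j, cutoff, sens, spec = youden_cutoff(scores, died)
print(cutoff, round(sens, 2), round(spec, 2))  # → 31 1.0 0.83
```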
The study included a total of 62 262 patients, comprising 32 410 men and 29 852 women. The mean age of the patients enrolled was 73 years. Among these patients, 3093 died in hospital, with a mean age of 77 years. The mean age of patients who experienced mortality was statistically significantly higher compared with those who were discharged ( P < 0.001). Furthermore, patients in the mortality group had significantly lower HALP scores compared with those who were discharged ( P < 0.001). Table summarizes the demographic characteristics of the patients, including age, sex, platelet count, hemoglobin level, albumin level, lymphocyte count and HALP scores, categorized by outcome (mortality or discharge). When evaluating the correlation between patient outcomes (mortality or discharge) and other variables, a negative correlation was observed between the HALP score and patient outcome. There was also a negative correlation between age and patient outcome. Both age and the HALP score showed statistically significant relationships with patient outcome ( P < 0.001 for both). The correlations of the patient outcome with age, sex, platelet count, lymphocyte count, albumin level, hemoglobin level and the HALP score assessed at the time of emergency department presentation are summarized in Table . On analyzing the utility of platelet count, lymphocyte count, albumin and hemoglobin levels measured at the time of emergency department presentation for predicting patient outcomes, it was observed that the albumin level at presentation was the most effective predictor of patient outcomes. After albumin, the HALP score, which involved the evaluation of all parameters simultaneously, emerged as the second most valuable predictor of patient outcomes based on receiver operating characteristic analysis (Fig. , Table ).
To the best of our knowledge, this study represents the first investigation in the literature evaluating the ability of the HALP score to predict mortality among geriatric patients. We determined that the HALP score calculated at the time of emergency department presentation was valuable in predicting mortality in this population. Little research has been carried out to establish a cut-off value for the HALP score in healthy individuals. However, in a cross-sectional study involving 8245 healthy volunteers, Antar et al. reported an average HALP score of 49 for adults, which was inversely related to the number of chronic diseases present in the volunteers. The authors also noted that advancing age was associated with a decrease in the HALP score, reporting an average HALP score of 45.6 for healthy volunteers aged ≥65 years. Although all patients included in the present study were aged ≥65 years, our evaluation did not include an assessment of comorbidities among the enrolled patients. In our study, the average HALP score for the 62 262 included patients aged ≥65 years was 37. The lower HALP scores observed in our study might be associated with undiagnosed chronic conditions among our patients. Furthermore, Antar et al. focused on healthy adults, whereas the present study group consisted of patients presenting to the emergency department. The HALP score was notably lower among patients who died in the hospital compared with those who were discharged, which is attributable to the lower albumin and hemoglobin levels of this population at the time of emergency department presentation. The association between the HALP score and mortality stems from the parameters that constitute this score. The first of these parameters is hemoglobin. 
In the geriatric patient group, anemia has been associated with a range of issues, including cognitive decline and dementia, frailty, increased risk of falls, decreased functional capacity, depression, prolonged hospital stays, and early death. Therefore, low hemoglobin levels could contribute to mortality by causing tissue‐wide hypoxia. The second parameter that constitutes the HALP score is albumin, which is associated with nutritional status and is noteworthy for its negative acute phase reactant properties in inflammatory conditions. In response to the increased synthesis of positive acute phase proteins, the synthesis of negative acute phase proteins, such as albumin and transferrin, is suppressed. Therefore, low albumin levels are important in evaluating geriatric patients due to the presence of both malnutrition and increased inflammatory markers. In the present study, we evaluated geriatric patients who presented to the emergency department with active diseases; therefore, we observed lower albumin levels in those who had a fatal outcome associated with inflammatory conditions. Systemic inflammation is assumed to stimulate neutrophilia and lymphopenia. Ayrancı et al ., who evaluated geriatric patients presenting to the emergency department with active diseases, also found a low lymphocyte ratio in the group with mortality compared with the group without mortality. In the present study, consistent with the literature, the lymphocyte count was lower in patients who developed mortality. Another parameter constituting the HALP score is platelet count. Platelets are known for their hemostatic functions, as well as their role in immune response and inflammatory processes. 
They engage in complex interactions with various immune-inflammatory cells: they facilitate the aggregation of lymphocytes at the site of inflammation in damaged vascular areas, actively mediate host responses during bacterial infections and interact with neutrophils during viral infections, thereby influencing responses that vary according to the nature and duration of the infection. Maintenance of these complex and diverse activation states is essential for vascular homeostasis and health regulation. Consequently, elevated platelet levels or markers of increased platelet activation have been associated with mortality in critical diseases. However, some studies have found lower platelet levels to be more valuable in predicting mortality in critically ill patients. In the present study, platelet levels were similarly found to be lower in patients with mortality. However, possibly because the lymphocyte count decreased more markedly than the platelet count, the reduction in the platelet count did not result in a substantial increase in the HALP score. A strength of the present study is its large sample size. However, it also had some limitations. First, there was variability between the initial diagnosis at the time of emergency department presentation and the diagnoses made during follow up. Second, HALP scores were not calculated separately for different diagnoses. Consequently, the primary diagnoses of the patients might have affected the HALP score. Third, the chronic comorbidities of the patients included in the study were not evaluated. Finally, the study was carried out in a single center with a retrospective design. The individual assessment of hemoglobin, albumin, lymphocyte and platelet levels can predict mortality in geriatric patients presenting to the emergency department. However, the HALP score, which incorporates all these parameters, appears to be a valuable tool in predicting mortality in this patient population. 
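The article discusses all four components of the HALP score but does not restate the formula itself. The definition commonly used in the HALP literature (an assumption here, since the source does not confirm it) is hemoglobin (g/L) × albumin (g/L) × lymphocyte count (10⁹/L) divided by platelet count (10⁹/L), which can be sketched as:

```python
def halp_score(hemoglobin_g_l, albumin_g_l, lymphocytes_1e9_l, platelets_1e9_l):
    """HALP = (hemoglobin * albumin * lymphocytes) / platelets.
    Units follow the convention commonly used in the HALP literature;
    the source article does not restate the formula, so this is an assumption."""
    if platelets_1e9_l <= 0:
        raise ValueError("platelet count must be positive")
    return hemoglobin_g_l * albumin_g_l * lymphocytes_1e9_l / platelets_1e9_l

# illustrative values only (not patient data from the study)
print(round(halp_score(130, 40, 1.5, 250), 1))  # → 31.2
```

Because hemoglobin, albumin and lymphocytes appear in the numerator, anemia, hypoalbuminemia and lymphopenia all push the score down, consistent with the lower scores reported for the mortality group.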
Geriatric patients initially presenting with a low HALP score should be considered at increased risk for mortality, and might benefit from hospitalization and further evaluation by a geriatric specialist rather than being discharged from the emergency department.
No financial support was received from any institution or organization for this study.
The authors declare no conflict of interest.
1. Study Design: FT. 2. Data Collection: FT, AG, ET. 3. Statistical Analysis: ET, AG, FT. 4. Data Interpretation: AG, FT, ET. 5. Manuscript Preparation: FT, ET, AG. 6. Literature Search: FT, AG, ET. 7. Funds Collection: FT.
This study was approved by the Atatürk University Clinical Research Ethics Committee with decision number 4/34 and dated 6 July 2024.
Combining simulation-based learning and online learning in ophthalmology | db3d2d93-ce20-4411-a8db-c2383e09e956 | 7809651 | Ophthalmology[mh] | How important it is for all students to learn basic skills in ophthalmology was demonstrated by a survey of 93 students; 53% of this group later work, alongside ophthalmology, in specialties such as internal medicine, pediatrics, gynecology, general practice, neurology or emergency medicine, in which screening of the eye is an essential skill. However, training with real instruments has been hampered for years by the fact that not enough patients with different disease patterns are available at the time of the practical course, and there is additional time pressure in the outpatient clinics. The typical alternative, in which trainees examine each other, is usually incompatible with their daily routine (pupil dilation, inability to read or drive), and it also hinders the systematic learning of pathologies and their treatment options. Various efforts have already been made toward more systematic and effective teaching. For example, in addition to clinical training, Lippa et al. used the fundoscopy simulator CLEO (Clinical Learning Experience in Ophthalmoscopy). CLEO is an anatomically correct simulator consisting of a mannequin head with one dilated and one undilated eye. The retinas of these two model eyes are simulated by a slide viewer holding the slide of a fundus photograph (35-mm wide-angle optics [60°]) with a known pathology. Kelly et al. describe a styrofoam model in which photographs of real retinas are mounted on the inner base of a white polyethylene cylinder (similar to a 35-mm film canister). 
A lens inserted into the opening of the polyethylene cylinder reproduces the optical imaging conditions of the real eye. Training on various disease patterns with this setup leads to a significant improvement in performance. The possibility of learning typical disease patterns from fundus photographs is carried further by the simulators Eyesi Direct and Eyesi Indirect using virtual reality techniques, and is supplemented with classic text- and image-based teaching content via the online platform EyesiNet that we developed. In this combination, defined learning content is made available reproducibly and equally to everyone, so that all trainees are offered the same opportunities, independent of staffing fluctuations. The individual learning success achieved through standardized training becomes measurable and comparable. A very useful side effect of computer-based training is that students can, if needed, continue their learning process on their own, independent of teaching venues and fixed times. The future physician learns to become familiar with typical disease patterns on demand and, if necessary, to recognize them again later on call and in stressful situations, independent of the specialty, to assess them correctly and to initiate the correct treatment. Depending on the disease pattern, this can be decisive for preserving vision.
The simulators we used, Eyesi Indirect and Eyesi Direct (VRmagic, Mannheim, Germany, software version 1.8), consist of a replica of the respective instrument (direct or indirect ophthalmoscopy with a binocular head-mounted ophthalmoscope), a patient model, a touchscreen monitor and a PC (Fig. ). In the eyepiece of the simulated direct ophthalmoscope (Eyesi Direct), the examiner sees a purely virtual representation of the observed structures, in particular the fundus. In contrast, Eyesi Indirect uses virtual and augmented reality; that is, the examiner sees the real image of the surroundings via a video signal on displays in the head-mounted ophthalmoscope, but a virtual patient is blended into the image in place of the physically present patient mask. In this way, the examiner can still monitor his or her own hand under ophthalmoscopic view and, very important for this examination method, brace against the patient's head under direct vision in order to find a stable position for the examination lens. This mixture of real and virtual situations creates a realistic and dynamic 3D learning environment. In both simulators, the virtual patients are depicted in a deceptively realistic manner. To match the disease pattern shown, they can be of different ages and ethnic origins. By operating the ophthalmoscope correctly (adjustment of light and refraction compensation, distance from the patient's eye, red reflex and, if applicable, positioning and orientation of the ophthalmoscopy lens), it is possible to see a realistic image of the retina. 
The case descriptions displayed by the software were adapted to the needs of the students within the project and aligned with the theoretical EyesiNet content: in addition to a brief presentation of the virtual patient's history, visual acuity and intraocular pressure, it is above all the multiple-choice questions on findings and diagnosis that refer to the disease patterns described in the web-based theory section (EyesiNet, see below).
The manufacturer of the simulators already provides a web-based training portal (VRmNet, version 8.0) in which, after logging in from any computer or mobile device, participants have access to an orientation course for the simulators, can view their training data and can build a library of the anatomical and pathological findings they have encountered on the Eyesi Direct and Indirect over the course of their training. Within the teaching project, we further developed the medical content of this online platform and structured it so that it reflects the needs of the students and the catalog of learning objectives (e.g., hypertensive retinopathy, diabetic retinopathy, choroidal tumors) and is also relevant for specialization in other fields (internal medicine, gynecology, pediatrics, neurology and others). To this end, in our adapted platform ("EyesiNet") 14 cases and their pathologies were organized in an index-card format by definition, classification, epidemiology, risk factors, histopathology, symptoms and clinical signs, diagnostics, therapy and prognosis, and the respective subsections were illustrated with images (Fig. ). A special feature is that the pathologies presented in EyesiNet are explained using screenshots from the simulators, so that during the practical exercises on simulated patients the students rediscover the findings they previously viewed online. In this way, the recognition effect and the associated learning motivation are considerably higher. Each student receives his or her own EyesiNet and simulator account. With the help of the online content, the simulator-based training can be prepared and reviewed at home via PC or smartphone, and individual training progress can be monitored.
After the required learning content had been created and integrated into the online platform, a prospective study was conducted, approved by the ethics committee of the Faculty of Medicine of Goethe University Frankfurt (decision number E 205/19, file number 19-327). The aim of the study was to evaluate the efficiency of the teaching approach. To this end, the scores of the cases completed on the simulator were correlated with the learning time in EyesiNet in order to test whether the learning curve for identifying disease patterns depends on the time spent in EyesiNet. Participation in the study was voluntary. Included were 10th-semester students who had already attended the ophthalmology lectures and were completing the ophthalmology practical course as part of the clinical phase of their studies. Informed consent was obtained from all study participants before enrollment. The participants were informed that their time in EyesiNet would be measured in pseudonymized form and correlated with their results on the simulators. On the first day of the practical course, the students first attended a 10-minute introductory lecture on the basic technique of the examinations and received a short demonstration of the simulators. Over a total access time of 2 h, they could then examine cases on the simulator. Using their individual access codes, they could, on a voluntary basis, continue to study the pathologies presented on the online platform EyesiNet alongside the course. On the last day of the practical course, the acquired knowledge was assessed on the simulator (access time again 2 h) and the practical skills were further deepened. In each session, the students were presented with randomly selected cases. 
At the end of each case, a quiz with multiple-choice questions on findings and diagnosis, referring to the content covered in EyesiNet, was completed on the simulator. After completing each case, the students were shown their results (Fig. a, b) and had the opportunity to examine the virtual patient again in order to better memorize the visual diagnoses. Using a questionnaire, the students rated EyesiNet and the simulator training after completing their practical course.
The overall score is based on the following criteria, which are computed during the examination: light exposure, total examination time ("examination time"), area of the examined retina ("examined retina"), findings ("classification") and diagnosis ("diagnosis") in the form of multiple-choice questions. Light exposure: the duration of excessively strong, and therefore damaging, light on the retina is measured. Total examination time: the time a student needs for the examination, counted in minutes. Area of the examined retina: the examined area of the retina is calculated, with the entire retina taken as 100% and the relative illumination computed. Multiple-choice questions (findings and diagnoses): the current value is first determined from the correct and incorrect answers and is then used to calculate the score:

$$\text{Current value} = \frac{\text{number of correct answers} - \text{number of incorrect answers}}{\text{predefined total number of correct answers}}$$

For each scoring criterion, value ranges and point ranges are defined in order to transform the measured value into a score. This is linearly interpolated according to the following formulas:

$$\text{Relative value} = \frac{\text{Current value} - \text{Start value}}{\text{End value} - \text{Start value}}$$

$$\text{Score} = \text{Start score} + \text{Relative value} \cdot (\text{End score} - \text{Start score})$$

If the current value lies outside the value range, the minimum or maximum score is used instead. Since no negative points are awarded, the students ultimately receive a total score of at least 0 and at most 100 points (Fig. ). The value and point ranges for the individual scoring criteria are shown in Table , together with a sample examinee.
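The interpolation and clamping rules above can be sketched as follows (the criterion ranges in the example are illustrative placeholders, not the actual values from the table):

```python
def criterion_score(value, start_value, end_value, start_score, end_score):
    """Linearly map a measured value onto a point range, clamping to the
    minimum/maximum score when the value lies outside the value range."""
    rel = (value - start_value) / (end_value - start_value)
    rel = max(0.0, min(1.0, rel))  # outside the value range -> min/max score
    return start_score + rel * (end_score - start_score)

# e.g. an "examined retina" criterion mapping 50%..100% coverage to 0..25 points
print(criterion_score(75, 50, 100, 0, 25))  # → 12.5
```

Clamping the relative value to [0, 1] also works for criteria where a larger measured value should yield fewer points (e.g. light exposure), as long as the start/end pairs are given in the corresponding decreasing order.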
In this non-randomized prospective study, no control group was formed, in accordance with the requirements of the ethics committee, so that all students equally had the opportunity to study in EyesiNet. Whether and for how long the students used the platform was left to them on a voluntary basis. Thus, at the end of the study, a non-randomized control group could be formed from those students who did not use the platform (group WITHOUT training). The data were recorded via the individual accounts on the Eyesi simulators and evaluated in Microsoft Excel 2016 and in BiAS version 11.12 for Windows (epsilon-Verlag, Dr. rer. med. Hanns Ackermann, Goethe University Frankfurt, Germany). To test whether a significant improvement could be achieved through the EyesiNet-supported training, given the non-parametric distribution of the data, a Wilcoxon matched-pairs test was applied to both groups and the effect size was assessed according to Rosenthal. The dependence of the improvement on the time spent in EyesiNet was tested using Spearman rank correlation. To compare the baseline values of the two groups, a Wilcoxon–Mann–Whitney U test was applied in order to show that both groups had the same starting conditions after attending the lecture and the examination.
A total of 86 students could be evaluated, of whom 32 took advantage of the voluntary offer and trained on the online platform EyesiNet in their free time during the practical course (group WITH training). The recorded activity showed that the overview pages of the 14 pathologies were accessed an average of 28 times (min. 1, max. 85). The subpages (with detailed information) were accessed by the trainees an average of 14 times (min. 0, max. 42). From the non-trainees (n = 54), a non-randomized control group could be formed in the evaluation (group WITHOUT training). Loss to follow-up occurred in 14 students.
Results for Eyesi Direct
Testing the baseline conditions of the two groups (WITHOUT vs. WITH training) with the Wilcoxon–Mann–Whitney U test showed no significant difference between the groups (p = 0.29). This means that neither group had a knowledge advantage before training with EyesiNet. Of the n = 54 students who did not take part in the EyesiNet training, 141 cases were completed on the Eyesi Direct at the beginning of the practical course and a total of n = 138 cases at the end. With a maximum total score of 100 points, the median before training was 37 points; afterwards, an increase to a median of 44 points was achieved. Testing the null hypothesis, a significant improvement with an effect size of 0.1 was found with p = 0.02 in the Wilcoxon matched-pairs test. According to Rosenthal, this corresponds to a small effect. Of the n = 32 students who took part in the EyesiNet training, 93 cases were completed on the Eyesi Direct at the beginning of the practical course and n = 83 cases at the end. With a maximum total score of 100 points, the median before training was 35 points; afterwards, an increase to a median of 45 points was shown. Testing the null hypothesis, a highly significant improvement with a medium effect size of 0.3 according to Rosenthal was found with p = 0.0004 in the Wilcoxon matched-pairs test. According to Spearman rank correlation, the time spent training in EyesiNet correlated with the improvement on the Eyesi Direct (total score after minus total score before) with p = 0.05 (correlation coefficient rho = 0.36).
Results for Eyesi Indirect
The Wilcoxon–Mann–Whitney U test showed that, as with the Eyesi Direct, the two groups had no significant difference in their results on the Eyesi Indirect at the beginning of the practical course (p = 0.10). 
Of the n = 54 non-trainees, 147 cases were completed on the Eyesi Indirect at the beginning of the practical course and 133 at the end. With a maximum total score of 100 points, the median before training was 22 points; afterwards, a minimal increase to a median of 23 points was achieved. With p = 0.41 in the Wilcoxon matched-pairs test, this does not correspond to a significant improvement over time. Of the n = 32 trainees, 87 cases were completed on the Eyesi Indirect at the beginning of the practical course and n = 85 cases at the end. With a maximum total score of 100 points, the median before training was 25 points; afterwards, an increase to a median of 26 points was achieved. Thus, with p = 0.17 in the Wilcoxon matched-pairs test, no significant improvement could be shown. According to Spearman rank correlation, the time spent on EyesiNet training just failed to correlate with the improvement on the Eyesi Indirect (total score after minus total score before), with p = 0.12.
The evaluation of the individual scoring criteria that make up the total score on the Eyesi Direct and Indirect is shown in Tables and . The students always remained within the permitted total time. From the outset, the examinations were performed with low light exposure for the retina on both the Eyesi Direct and the Eyesi Indirect. While the results for the area of the examined retina on the Eyesi Direct were in the upper range, the results on the Eyesi Indirect were clearly below them. An increase in score was demonstrated above all in the areas of findings and diagnosis.
Evaluation of the questionnaires showed that the majority of students were convinced by both the simulator training and EyesiNet (Fig. ) and also subjectively felt that it improved their ophthalmological knowledge. In addition, it was shown that interest in ophthalmology, a rather small subject within the medical curriculum, can be further increased by training adapted to the needs of the students.
It could be shown that the combination of practical training on simulators and accompanying theory, suitably interwoven with the practical training via online platforms, not only subjectively increases the students' interest and ophthalmological knowledge but also demonstrably leads to better training results. Practical skills in particular must not fade too far into the background in a medical degree program. In practice, residents often lack the ability to apply their theoretical knowledge, to carry out the required diagnostic procedures themselves and, after making the correct diagnosis, to initiate the appropriate treatment. Through the simulation, on the first day of the practical course the students obtain an objective picture of their current level of knowledge in ophthalmology and, by learning the skills, also the motivation to be able to make the correct diagnosis independently, given sufficient theoretical knowledge. As the results above show, direct ophthalmoscopy in particular is a relatively simple examination method that should be used more intensively in student teaching. For direct ophthalmoscopy, a learning effect from practice on the simulator alone is already demonstrable. However, it is considerably greater when the students have worked through the findings on the online platform in the meantime. Learning these skills proves to be much easier than learning indirect ophthalmoscopy and should be encouraged, since direct ophthalmoscopy can also easily be performed later in professional life by internists, pediatricians and general practitioners. It can help to triage emergencies (such as central retinal artery occlusion or papilledema, among others) and to refer them to the correct specialties. 
For indirect ophthalmoscopy, no significant improvement was achieved during the practical course: the results on the Eyesi Indirect were clearly worse than the training results on the Eyesi Direct in both groups, in both the first and the second simulation training, which suggests that indirect ophthalmoscopy requires considerably more learning time before structures can be visualized and then correctly assessed. This is presumably because the students must first develop a feel for eye-hand coordination and for the inverted image. The available training time of 4 h in total can, however, at least help the students develop a feel for this method, possibly spark further interest in ophthalmology and thereby also contribute to choosing the individually right specialty. Combining this with online content accessible at any time (e.g., also via mobile phone and tablet) gives the students the opportunity to return to this information again and again after the practical course and to use the prepared learning content as a kind of reference work. This can help, for example, when students recognize a disease pattern as familiar but can no longer assign it to the pathology. That this function is used could be demonstrated in this study: the portal was accessed again after completion of the practical course by 10 of the 32 students in the training group, whereas it was used by only one student from the group that had not engaged with EyesiNet during the practical course. A weakness of this study is the limited time available to the students in 1 week to learn these skills. 
Accordingly, it is surprising that a significant improvement in results was nevertheless achieved in direct ophthalmoscopy. However, only one student achieved the maximum score of 100 points in a case after training on the Eyesi Direct. The maximum score on the Eyesi Indirect was 88 points. The training goal should be to bring the majority of students to this high level. The learning effort required of each student must be assessed individually. Simulators are advantageous precisely for this purpose, since they allow students to continue training independently after completing their practical course. To this end, we offer students training times on the simulators and, of course, also the possibility of further deepening their knowledge in the elective subject of ophthalmology. With n = 14, there was a relatively high loss-to-follow-up rate. This rate is due, among other things, to the very extensive curriculum-based training program. Overlaps with other subjects occurred, which led some students to take their permitted absence on the last day of the practical course. In addition to a lack of interest, this may also be the reason why participation in the voluntary offer of the EyesiNet online courseware was not ideal. Of the 86 students, only 32 made active use of it. It should be considered whether the previously voluntary online training should be restructured into a mandatory part of the practical course so that all students can be brought to the same level. Furthermore, greater attention must be paid to a sensible arrangement of practical courses and examinations. In this context, we will also comply with the students' wish and in the future make the online platform available as early as the lecture period, so that it can be used for both examination and practical course preparation.
The combination of realistic simulation with matching learning content on online platforms is motivating and efficient. It leads to improved ophthalmological knowledge, both subjectively and objectively. It also makes it possible to continue training before, during and after completion of the practical course, since students retain access and the "module" can be integrated into preparations for the final examination or called up again at any time and in any place as a mobile app in professional life. At the Department of Ophthalmology of Goethe University Frankfurt am Main, the combination of online training and simulation-based teaching has become an integral part of the practical course and, in a further teaching proposal, is to be extended to the examination of the anterior segment. In times of the COVID-19 pandemic, simulator training provides a safe environment for practical exercises. Moreover, EyesiNet can digitally provide interactive supplementary information to the textbooks. On request, we also offer to make EyesiNet available to other training institutions.
NIPAT as Non-Invasive Prenatal Paternity Testing Using a Panel of 861 SNVs | b7619d70-c719-4f79-8bc8-b4dd8afdce27 | 9957069 | Forensic Medicine[mh] | To date, diagnostic genetic testing of the fetus during early pregnancy requires invasive procedures such as Chorionic Villus Sampling (CVS) and amniocentesis (also called amnio) associated with miscarriage risk. In 1997, it was discovered that maternal plasma contains cell-free fetal DNA (cffDNA) . Most cffDNA comes from villous cells, with its concentration increasing proportionally with gestational age, enabling the chance to obtain fetal genetic information from maternal plasma. Fetal cfDNA has an average length of 150 bp (Base Pair), and comprises fragments that are shorter on average than maternal cell-free DNA. It is released by apoptotic cells in trophoblasts. Placental trophoblasts and fetuses develop from the same blastocyst and therefore share the same genome, promoting the utility of cfDNA to test fetal DNA. The placenta releases significant levels of fetal DNA into the maternal circulation, with concentrations of fetal DNA in maternal plasma showing levels of 10–20% between 10 and 20 weeks of gestation. It is well known that circulating cffDNA has a mean half-life of 16.3 minutes (min) and is undetectable in maternal plasma 2 hours post-delivery, indicating that cffDNA testing cannot be affected by carryover from previous pregnancies . The advent of Next Generation Sequencing (NGS) technology, and therefore the ability to analyze sources of DNA, led to the development of several prenatal genetic tests proposed as Non-Invasive Prenatal Screening (NIPT or NIPS) . The main advantage of using cffDNA is the non-invasive nature of the test compared to traditional procedures. NIPT is currently being conducted globally, with more than 10 million tests having been performed in 2018, and many countries are already using NIPT in their routine . 
Since then, cffDNA has also been investigated as a source of fetal DNA for Non-Invasive Prenatal Paternity Testing (NIPPT or NIPAT). In particular, parental assessment is one of the central aspects of forensic genetics. These analyses are performed by using genetic biomarkers characterized by high variability. The first genetic biomarkers to be used for human paternity testing were Short Tandem Repeats (STRs). In addition, a new class of genetic biomarkers, which can be used for parental assessment and various forensic applications, are Single Nucleotide Polymorphisms (SNPs). Compared with STR loci, SNP sites have a lower mutation rate, and the amplification products of single SNP sites can be very short, which makes SNPs suitable for the analysis of highly degraded forensic samples. Moreover, SNPs are related to multiple phenotypes, such as skin color, eye color, hair color, ethnicity information and susceptibility to multifactorial disorders. Although many non-invasive tests have been developed so far, few data are available regarding the reliability and reproducibility of these methods. In this article, we present a Non-Invasive Prenatal Paternity Test (NIPAT) analyzing 861 Single Nucleotide Variants (SNVs) from cffDNA through Ion S5 NGS technology. The selected technology is already used in many laboratories for forensic next generation sequencing protocols, and several commercially available kits have been validated. The test was validated on more than 900 meiosis samples. NIPAT generated log(CPI) (Combined Paternity Index) values for designated fathers ranging from +34 to +85, whereas the log(CPI) values calculated for unrelated individuals were below −150. Finally, the performance of NIPAT was fairly concordant with the paternity compatibility threshold (log(CPI) > +4) and the paternity exclusion threshold (log(CPI) < −4) suggested by the reference guidelines and reference literature for SNV approaches.
2.1. Selection of Samples and NGS Analysis
The samples were recruited from the molecular genetic laboratory Eurofins Genoma, and signed informed consent was obtained from all of the participants before sample collection. Peripheral blood samples (10 mL) were collected from nine pregnant women during the first trimester of pregnancy. Buccal swabs or peripheral blood samples were collected from their partners. In particular, maternal peripheral blood samples were used for the extraction of cell-free fetal DNA and subsequently employed to perform GeneSafe®. Maternal genomic DNA was employed to perform GeneScreen® testing. GeneSafe® is a non-invasive prenatal test, based on NGS technology, that allows the identification of pathogenic/likely pathogenic variants involved in inherited and de novo single-gene disorders. GeneScreen®, on the other hand, is a carrier screening test performed by targeted sequencing. These genetic tests were used to select the nine "mother–designated father" sample couples; only sample couples whose fetuses carried pathogenic/likely pathogenic variants transmitted by the father were selected, providing genetic confirmation of the parental relationship. Maternal plasma was separated from the peripheral blood by centrifugation at 1600 RCF (Relative Centrifugal Force) at 4 °C for 10 min. The supernatant was then transferred to a new tube and centrifuged for an additional 10 min at 16,000 RCF at 4 °C. cffDNA was then extracted using a QIAsymphony® DSP Circulating DNA Kit (QIAGEN, Hilden, Germany) and the QIAsymphony Automatic Extraction System (QIAGEN, Hilden, Germany) according to the manufacturer's instructions. The QIAsymphony® DSP Circulating DNA Kit is based on magnetic-particle technology for the automated isolation and purification of human circulating cffDNA.
Furthermore, the QIAsymphony DSP Circulating DNA Kit is a ready-to-use system for the qualitative purification of human circulating cell-free DNA from human plasma. Genomic DNA (gDNA) from paternal samples was extracted with a Qiagen DNA Mini Kit. A custom PCR amplification panel was designed through the Thermo Fisher Ion AmpliSeq Designer ( www.ampliseq.com , accessed on 6 July 2021) using a set of 861 SNVs well documented in dbSNP. Single nucleotide variant selection was based on the following criteria: all SNVs had to have population genetics data available from the dbSNP, 1000 Genomes and/or FrogKB databases. We excluded variants labeled as indels (insertions and deletions) or Multiple Nucleotide Variants (MNVs)/complexes, variants classified as pathogenic or likely pathogenic, and variants located in highly repeated regions or in pseudogenes. The resulting SNVs were selected for having a MAF (Minor Allele Frequency) > 0.3 in at least one population and/or were manually selected to optimize the chance of discriminating between populations, and were spread across all the chromosomes (chr). In total, 638 of the 861 selected single nucleotide variants were biallelic and 223 were triallelic in the dbSNP database. The average MAF of the panel, excluding triallelic SNVs, was 0.321 (median 0.347), and the number of SNVs per chromosome ranged from 12 to 95, with an average of 36. In particular, the numbers of SNVs for each chromosome were: chr1:41, chr2:61, chr3:36, chr4:95, chr5:34, chr6:60, chr7:25, chr8:74, chr9:33, chr10:26, chr11:35, chr12:33, chr13:24, chr14:31, chr15:33, chr16:35, chr17:39, chr18:23, chr19:12, chr20:29, chr21:17, chr22:17, chrX:23 and chrY:25. The cffDNA extracted from the maternal plasma samples, as well as the gDNA from the paternal samples, was used in parallel for library preparation with the Ion AmpliSeq™ Library Kit Plus (Thermo Fisher Scientific, Foster City, CA, USA). This kit is engineered for the rapid preparation of amplicon libraries.
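The inclusion criteria above can be expressed as a simple filter; the following sketch is illustrative only, with hypothetical variant records and field names rather than the actual dbSNP/1000 Genomes export used by the authors.

```python
# Sketch of the SNV panel inclusion criteria described in the text.
# Record fields and the candidate list are invented for illustration.

def keep_snv(v):
    """Apply the panel inclusion criteria to one candidate variant record."""
    if v["type"] != "snv":                      # exclude indels and MNVs/complexes
        return False
    if v["clinical"] in ("pathogenic", "likely_pathogenic"):
        return False
    if v["in_repeat_or_pseudogene"]:
        return False
    # keep if common (MAF > 0.3) in at least one population
    return max(v["maf_by_population"].values()) > 0.3

candidates = [
    {"id": "rs0001", "type": "snv", "clinical": "benign",
     "in_repeat_or_pseudogene": False,
     "maf_by_population": {"EUR": 0.42, "AFR": 0.31}},
    {"id": "rs0002", "type": "indel", "clinical": "benign",
     "in_repeat_or_pseudogene": False,
     "maf_by_population": {"EUR": 0.45}},
    {"id": "rs0003", "type": "snv", "clinical": "benign",
     "in_repeat_or_pseudogene": False,
     "maf_by_population": {"EUR": 0.05, "EAS": 0.12}},
]

panel = [v["id"] for v in candidates if keep_snv(v)]
print(panel)  # ['rs0001'] — only the common, non-indel, benign SNV passes
```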
The Ion AmpliSeq™ Library Kit Plus comes in an on-plate format to facilitate sample processing, traceability and compatibility with automation, and it provides high, uniform, reliable and reproducible output. Sequencing maternal cffDNA samples requires much greater read depth than paternal gDNA samples, because determining the fetal genotype requires the evaluation of low-frequency alleles in the cffDNA samples. For this reason, the last step of the library preparation is crucial to correctly balance maternal and paternal samples in the same pool: in order to obtain the proper number of reads, it is very important to load the amplified cffDNA and gDNA samples at a 20:1 ratio (in terms of nanograms), respectively. The entire pool is then quantified, diluted to 100 pM (picomolar), and finally processed on an Ion Chef™ Instrument for the templating and enrichment procedures. In particular, the Ion Chef™ System reduces sources of user-introduced variability and supports sequencing preparation for the Ion S5™ System. A final 500-flow sequencing run was performed using the Ion 540™ Chip on the Ion S5™ System (Thermo Fisher Scientific, Foster City, CA, USA). The Ion S5 System is a semiconductor sequencing system which allows different sequencing workflows. The Thermo Fisher Scientific S5 sequencing platform automatically computes a set of quality-check statistics on the next generation sequencing reads, pre- and post-alignment with the hg19 human reference genome. These statistics include read length histograms, chip-loading-density percentage, the total number of mapped and unmapped reads, and similar self-explanatory statistics. The most useful ones are the "on target", "uniformity", and "mean depth" statistics from the Thermo Fisher Scientific "coverage analysis" plugin.
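The 20:1 pooling step amounts to a small mass-to-volume calculation. In this sketch, the library concentrations and the 1 ng gDNA input are invented; only the 20:1 mass ratio comes from the text.

```python
# Illustrative calculation of the 20:1 (cffDNA : gDNA, in nanograms)
# library balancing step. Concentrations are hypothetical examples.

def pool_volumes(cff_conc_ng_ul, gdna_conc_ng_ul, gdna_input_ng=1.0, ratio=20.0):
    """Return (cffDNA uL, gDNA uL) giving a ratio:1 mass ratio in the pool."""
    gdna_ul = gdna_input_ng / gdna_conc_ng_ul
    cff_ul = (gdna_input_ng * ratio) / cff_conc_ng_ul
    return cff_ul, gdna_ul

cff_ul, gdna_ul = pool_volumes(cff_conc_ng_ul=4.0, gdna_conc_ng_ul=2.0)
print(f"cffDNA: {cff_ul:.1f} uL, gDNA: {gdna_ul:.1f} uL")  # cffDNA: 5.0 uL, gDNA: 0.5 uL
```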
They represent the alignment statistics for each amplicon of the panel: the "on target" statistic is the percentage of reads aligning in correspondence of an amplicon included in the panel; the "uniformity" statistic is the percentage of the amplicon bases covered by at least 0.2× the average base read depth; and the "mean depth" statistic is the average base coverage depth over all bases targeted in the reference. To be evaluable, samples are expected to have an "on target" value > 90%, a "uniformity" value > 90%, and a "mean depth" > 8000 for cffDNA samples (> 400 for gDNA samples). The Thermo Fisher Scientific system produces a BAM file for each sample, containing all the aligned reads. This file is exportable from the system and can be submitted for further bioinformatics analysis. In this study, the BAM files obtained from the S5 instrument were analyzed using the NIPAT-flow data analysis pipeline developed by the Eurofins Genoma Group.
2.2. NIPAT-Flow Algorithm
The algorithm evaluates the compatibility of each maternal cell-free fetal DNA sample against each alleged father. The fetal genotypes for each SNV included in the analysis are inferred from the maternal samples. Furthermore, the algorithm's robustness has been validated using a set of mock samples generated by simulating 100 biological brothers for each biological father. The algorithm takes as input the BAM files obtained from the next-generation sequencing process, and it produces intermediate reports (one for each mother vs. alleged father comparison) and a final overall report including the paternity probability (W) for each comparison. The evaluation follows directly from the Combined Paternity Index (CPI) likelihood statistic, adapted for use in the context of an SNV-based prenatal test. A kinship relationship is universally evaluated by comparing the likelihoods of observing the obtained genotypes under two alternative hypotheses (i.e., the Likelihood Ratio, LR).
In the case of paternity testing, the hypothesis that an individual is related to another individual through a father–son relationship is evaluated against the hypothesis that the two individuals are unrelated. The higher the LR, the more supported is the first hypothesis (paternity); the lower the LR (i.e., <1), the more supported is the second hypothesis (unrelated individuals). For each SNV, the Paternity Index (PI) is classically calculated as a likelihood ratio according to Bayes' theorem. PI is defined as the ratio between the probability of the fetal genotype being the observed one (event E) conditioned on the alleged father being the biological father (hypothesis H1) and the same probability where the father is a random individual drawn from the population (hypothesis H2): PI = Pr(E | H1)/Pr(E | H2). When multiple loci are used to determine paternity, the product of all the individual PI values for each locus is the Combined Paternity Index. The PI formulas are adapted to the case of prenatal tests, where fetal genotypes need to be inferred from the maternal cffDNA samples. In particular, this was done by also taking into account technical errors and natural effects (e.g., sequencing errors or de novo mutations) that could lead to fetal genotype misinterpretation and eventually to a biased PI calculation. The CPI for a couple is then the product of all the PIs (one for each SNV included in the analysis), and the paternity probability (W) is calculated as W = CPI/(CPI + 1) × 100. Only SNVs where the mother's genotype is homozygous are included in the CPI calculation, because for heterozygous maternal positions it is statistically inaccurate to infer the fetal genotype from the maternal cell-free fetal DNA sample alone. The algorithm defines the fetal genotype from the maternal cffDNA sample using a set of fetal base thresholds.
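The PI/CPI/W computation can be sketched as follows for biallelic SNVs with a homozygous mother. This is a minimal version that omits the error model mentioned in the text, and the genotypes and allele frequencies below are invented for illustration.

```python
# Minimal sketch of the PI/CPI/W calculation for biallelic SNVs with a
# homozygous mother (no sequencing-error or mutation model, unlike the
# published pipeline). Example genotypes and frequencies are invented.
from math import log10

def transmit_prob(father_gt, allele):
    """P(father passes `allele`): 1 if homozygous for it, 0.5 if heterozygous."""
    return father_gt.count(allele) / 2.0

def paternity_index(paternal_allele, father_gt, allele_freq):
    """PI = P(obligate paternal allele | alleged father) / P(allele | random man)."""
    return transmit_prob(father_gt, paternal_allele) / allele_freq[paternal_allele]

# (obligate paternal allele deduced from the fetal genotype,
#  alleged-father genotype, population allele frequencies)
snvs = [
    ("B", "AB", {"A": 0.6, "B": 0.4}),
    ("A", "AA", {"A": 0.5, "B": 0.5}),
    ("B", "BB", {"A": 0.7, "B": 0.3}),
]

cpi = 1.0
for allele, father_gt, freqs in snvs:
    cpi *= paternity_index(allele, father_gt, freqs)

w = cpi / (cpi + 1) * 100          # paternity probability, as in the text
print(round(log10(cpi), 3), round(w, 2))  # 0.921 89.29
```

With hundreds of informative SNVs, the product is what drives log(CPI) to the large positive or negative values reported for fathers and unrelated men.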
In fact, for each SNV, if a low-frequency base different from the maternal homozygous genotype is detected in the maternal sample, the fetal genotype is inferred as heterozygous. A minimal coverage of 1000 reads for the maternal sample and 100 reads for the alleged father is required for inclusion in the Paternity Index calculations, and a base coverage of at least 100 reads is required for an allele with a frequency ranging from 1.5% to 15% to be assigned as fetal. In the case of variants located on chromosome Y, the maternal zygosity filter is obviously inapplicable; however, the coverage filter (> 100 reads) is still applied. Overall, the number of SNVs reporting a low-frequency allele varies among different meiosis samples, ranging from 131 to 173 with an average of 145 SNVs. Multiple checks and features are included in the algorithm to improve its robustness against both human and technological errors. The algorithm performs an assessment of relatedness indexes among all the different sample pairs. Some thresholds are set in the algorithm to deal with noise and low coverage. In particular, the algorithm includes a noise reduction method to ensure a more robust call of the fetal base, starting from the mother's genotype: fetal genotype calling implements an SNV-specific threshold for the low-frequency alleles, which relies on previously collected low-frequency-allele data from samples without cffDNA. As a support, the CPI is also calculated using a different set of thresholds optimized for low-fetal-fraction samples. The robustness of the CPI calculation is further ensured by requiring no fewer than 30 SNVs reporting a low-frequency allele (1.5–15%).
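The fetal-genotype call at a single maternally homozygous SNV can be sketched with the coverage and frequency thresholds quoted above (maternal depth of at least 1000, fetal allele frequency 1.5–15% supported by at least 100 reads). The base counts are invented, and the real pipeline additionally applies SNV-specific noise thresholds.

```python
# Sketch of fetal genotype inference from maternal cffDNA at one SNV
# where the mother is homozygous. Thresholds follow the text; base
# counts are invented examples.

def call_fetal_genotype(base_counts, maternal_allele):
    """base_counts: dict base -> reads at a maternally homozygous SNV."""
    depth = sum(base_counts.values())
    if depth < 1000:
        return None                       # insufficient maternal coverage
    for base, reads in base_counts.items():
        if base == maternal_allele:
            continue
        freq = reads / depth
        if reads >= 100 and 0.015 <= freq <= 0.15:
            # low-frequency allele differing from the maternal genotype
            # => fetus inferred heterozygous
            return maternal_allele + base
    return maternal_allele * 2            # no fetal-specific allele detected

print(call_fetal_genotype({"A": 9500, "G": 500}, "A"))  # AG (G at 5%)
print(call_fetal_genotype({"A": 9950, "G": 50}, "A"))   # AA (G below thresholds)
```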
2.3. Simulating Father’s Brothers
For each compatible couple, 100 synthetic brothers of the designated father were simulated (for a total of 900 simulated samples) to evaluate the performance of NIPAT-flow on individuals whose genetic profile was closely related to that of the real biological father. A two-step probabilistic model was designed to define the synthetic sample's genotype for each SNV; each father's brother is then a sample drawn from this model. Given the designated father, a couple of synthetic parents was first sampled from an inferred probability distribution, and a synthetic son of theirs was then generated using the equiprobable combination of their genotypes. In more detail, a Bayesian approach was used to infer the parents' genotypes. For each SNV, the vector of MAF values was taken as the prior probability distribution. This quantity was updated considering the profile of the designated father, using the likelihood of his genotype conditioned on his parents'. The normalized product of these two quantities is a posterior probability distribution from which the genotypes of the parents are sampled. This probabilistic model allows us to create individuals sharing a major part of their genetic profile with the designated father. Defining concordance between two individuals as the percentage of SNVs showing an identical genotype over the total number of SNVs, the brothers showed, on average, a concordance of 68.6% with the designated father. This percentage varied similarly across the different brothers and the different individuals (64.0–72.8%). As expected, the concordance between unrelated individuals was lower, ranging from 18.9% to 56.4% (average 36.1%).
2.4. Statistical Analysis
To evaluate the reliability and robustness of NIPAT, couples originating from the maternal samples and the biological fathers (n = 9), unrelated fathers (n = 72), and simulated brothers (n = 900) were tested by comparing the log(CPI) distributions between groups.
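The two-step sibling simulation of Section 2.3 can be sketched at a single biallelic SNV as follows. This toy version assumes a Hardy–Weinberg genotype prior built from the MAF; the MAF value of 0.4 is invented, and only one SNV is simulated rather than the full panel.

```python
# Toy single-SNV version of the two-step sibling simulation: infer a
# posterior over the synthetic (grand)parents' genotypes given the
# designated father's genotype, then draw a "brother" by combining one
# allele from each sampled parent. The MAF is an invented example.
import random

def genotype_prior(maf):
    p, q = 1 - maf, maf                      # Hardy-Weinberg prior
    return {"AA": p * p, "AB": 2 * p * q, "BB": q * q}

def transmit(gt):                            # P(parent passes allele "B")
    return gt.count("B") / 2.0

def child_likelihood(child_gt, father_gt, mother_gt):
    pb_f, pb_m = transmit(father_gt), transmit(mother_gt)
    if child_gt == "BB":
        return pb_f * pb_m
    if child_gt == "AA":
        return (1 - pb_f) * (1 - pb_m)
    return pb_f * (1 - pb_m) + (1 - pb_f) * pb_m   # heterozygous "AB"

def sample_brother(designated_father_gt, maf, rng):
    prior = genotype_prior(maf)
    # posterior over parent genotype pairs, given the father's genotype
    pairs, weights = [], []
    for f, pf in prior.items():
        for m, pm in prior.items():
            pairs.append((f, m))
            weights.append(pf * pm * child_likelihood(designated_father_gt, f, m))
    f, m = rng.choices(pairs, weights=weights)[0]
    # brother = one random allele from each sampled parent
    return "".join(sorted(rng.choice(f) + rng.choice(m)))

rng = random.Random(0)
print([sample_brother("AB", maf=0.4, rng=rng) for _ in range(5)])
```

Repeating this over all panel SNVs yields synthetic profiles that, as reported above, share most of their genotypes with the designated father.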
At first, the log(CPI) parent distribution was tested for normality using a Shapiro–Wilk test and evaluated with a skewness–kurtosis plot for the empirical distribution. This plot combines information about skewness and kurtosis, which are measures of the shape of a distribution; it can help decide whether a parametric or non-parametric statistical test is appropriate by determining whether the distribution of the data is normal or non-normal. Bootstrapping (nboot = 100) was used to test the stability of the skewness and kurtosis statistics under resampling, ensuring the reliability of the evaluation. As the distribution of log(CPI) was identified as non-normal, a non-parametric statistical test was chosen, considering that the inappropriate use of a parametric test would have given biased results. A Kruskal–Wallis rank sum test was used to test whether log(CPI) differed between biological fathers, unrelated couples, and simulated brothers. Post hoc tests were then performed with the two-tailed Wilcoxon test for two independent samples, and p-values were corrected for multiple testing using the false discovery rate correction. We set α = 0.0001, corresponding to a confidence level (1 − α) of 99.99%. This highly restrictive confidence level was set to reduce the risk of incurring a false positive result. Moreover, to further confirm our result, we computed a 99.99% confidence interval for the true mean of the simulated brothers' log(CPI) distribution. The statistical analysis was performed with the R programming language.
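The false-discovery-rate correction applied to the post hoc p-values is the Benjamini–Hochberg step-up procedure, which can be sketched as follows. The analysis in the paper was done in R; this is a language-neutral illustration in Python, and the three raw p-values are invented placeholders, not the study's results.

```python
# Hand-rolled Benjamini-Hochberg FDR correction, as applied to the
# post hoc pairwise p-values. Raw p-values here are invented examples.

def benjamini_hochberg(pvalues):
    """Return BH-adjusted p-values, in the original input order."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])       # ascending raw p
    adjusted = [0.0] * m
    running_min = 1.0
    # step up from the largest rank: adj p_(i) = min over j >= i of p_(j) * m / j
    for rank, i in reversed(list(enumerate(order, start=1))):
        running_min = min(running_min, pvalues[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

raw = [0.0002, 0.03, 0.04]   # e.g. the three pairwise group comparisons
print([round(p, 4) for p in benjamini_hochberg(raw)])  # [0.0006, 0.04, 0.04]
```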
The samples were recruited from the molecular genetic laboratory Eurofins Genoma and signed informed consent was obtained from all of the participants before blood sample collection. Peripheral blood samples (10 mL) were collected from nine pregnant women during the first trimester of pregnancy. Buccal swabs or peripheral blood samples were collected from their partners . In particular, maternal peripheral blood samples were used for the extraction of cell-free fetal DNA and subsequently employed to perform GeneSafe ® . Maternal genomic DNA was employed to perform GeneScreen ® testing. GeneSafe ® is a non-invasive prenatal test, based on NGS technology, that allows the identification of pathogenic/likely pathogenic variants involved in inherited and de novo single-gene disorders. On the other hand, GeneScreen ® is a carrier screening test performed by targeted sequencing. These genetic tests were employed for the sample selection of the nine “mother – designated father” sample couples; only samples couples indicating fetuses with pathogenic/likely pathogenic variants transmitted by the father were selected for the genetic confirmation of parental relationship. Maternal plasma was separated from the peripheral blood by centrifugation at 1600 RCF (Relative Centrifugal Force) with a temperature of 4 °C for 10 min. Subsequently, the supernatant was transferred to a new tube and it was centrifuged for an additional 10 min at 16,000 RCF with a temperature of 4 °C. cffDNA was extracted again using a QIAsymphony ® DSP Circulating DNA Kit (QIAGEN, Hilden, Germany) and QIAsymphony Automatic Extraction System (QIAGEN, Hilden, Germany) according to the manufacturer’s instructions. The QIAsymphony ® DSP Circulating DNA Kit is based on magnetic-particle technology for the automated isolation and purification of human circulating cffDNA. 
Furthermore, the QIAsymphony DSP circulating DNA Kit is a ready-to-use system for the qualitative purification of human circulating cell-free DNA from human plasma. Genomic DNA (gDNA) from paternal samples was extracted with a Qiagen DNA Mini Kit. A custom PCR amplification panel was designed through Thermo Fisher Ion Ampliseq Designer ( www.ampliseq.com , accessed on 6 July 2021) using a set of 861 SNVs well-documented on dbSNP . Single nucleotide variant selection was based on the following criteria: all SNVs had to have available population genetics data from dbSNP, 1000 genomes and/or FrogKB databases. We excluded variants labeled as indel (insertion and deletion), Multiple Nucleotide Variants (MNV)/complexes, and those that were pathogenic or likely pathogenic, and we excluded variants in highly repeated regions or in pseudogenes. The resulting SNVs were selected due to having MAF (Minor Allele Frequency) > 0.3 in at least one population, and/or were manually selected to optimize the chance of discriminating between populations to be spread across all the chromosomes (chr). In total, 638 of the 861 selected single nucleotide variants were biallelic and 223 were triallelic in the dbSNP database. The average MAF of the panel excluding triallelic SNVs was 0.321 (median in 0.347) and the number of SNVs per chromosome ranged from 12 to 95, with an average of 36. In particular, the numbers of SNVs for each chromosome were: chr1:41, chr2:61, chr3:36, chr4:95, chr5:34, chr6:60; chr7:25, chr8:74, chr9:33, chr10:26, chr11:35, chr12:33, chr13:24, chr14:31, chr15:33, chr16:35, chr17:39, chr18:23, chr19:12, chr20:29, chr21:17, chr22:17, chrX:23 and chrY:25. The extracted cffDNA from the maternal plasma samples, as well as gDNA from the paternal samples, have been parallelly used for library preparation using the Ion AmpliSeq™ Library Kit PLUS (Thermo Fisher Scientific, Foster City, CA, USA). This kit is engineered for the rapid preparation of amplicon libraries. 
The Ion AmpliSeq™ Library Kit Plus is an on-plate format to facilitate sample processing, traceability and compatibility with automation. The Ion AmpliSeq™ Library Kit Plus provides high, uniform, reliable and reproducible output. Sequencing maternal cffDNA samples requires much more read depth compared to paternal gDNA samples because the evaluation of the presence of low-frequency alleles in cffDNA samples is necessary to determine the fetal genotype. For this reason, the last step of the library preparation is crucial to balance correctly maternal and paternal samples in the same pool. In fact, in order to pool cffDNA and gDNA amplified samples in order to obtain the proper number of reads, it is very important to load them 20:1 (in terms of nanograms), respectively. The entire pool is then quantified, diluted to 100 pM (parts per million), and finally processed with an Ion Chef™ Instrument for the templating and enrichment procedures. In particular, the Ion Chef™ System reduces sources of user-introduced variability and supports sequencing preparation for the Ion S5™ System. A final 500 flows sequencing has been performed using the Ion 540™ Chip running on the Ion S5™ System (Thermo Fisher Scientific, Foster City, CA, USA). The Ion S5 System is a semiconductor system which allows different sequencing workflows. The Thermo Fisher Scientific S5 sequencing platform automatically performs a set of next generation sequencing reads and quality checks statistics pre- and post-alignment with the hg19 human reference genome. Statistics includes read length histograms, chip-loading-density percentage, total number of mapper and unmapped reads, and similar self-explanatory statistics. The most useful ones are the “on target”, “uniformity”, and “mean depth” statistics from the Thermo Fisher Scientific “coverage analysis” plugin. 
They represent the alignment statistics for each amplicon of the panel, where the “on target” statistic represents the percentage of reads aligning in correspondence of an amplicon included in the panel; the “uniformity” statistic represents the percentage of the amplicon bases covered by at least 0.2 × the average base reads depth; and the “mean depth” statistic represents the average base coverage depth over all bases targeted in the reference. To be evaluable, we expect samples to have “on target” value > 90%, “uniformity” value > 90%, and “mean depth” > 8000 for cffDNA samples (and > 400 for gDNA samples). The Thermo Fisher Scientific system produces a BAM file for each sample, containing all the aligned reads. This file is exportable from the system and can be submitted for further bioinformatics analysis. In this study, the BAM files obtained from the S5 instrument have been analyzed using the NIPAT-flow data analysis pipeline developed by the Eurofins Genoma Group.
The algorithm evaluates the compatibility of each maternal cell-free fetal DNA sample against each alleged father. The fetal genotypes for each SNV included in the analysis are inferred from the maternal samples. Furthermore, the algorithm robustness has been validated using a set of mock samples generated by simulating 100 biological brothers for each biological father. The algorithm utilizes the BAM files obtained from the next-generation sequencing process, and it produces some intermediate reports (one for each mother vs. alleged father comparison) and a final overall report including the paternity probability (W) for each comparison. The evaluation is straightforward from the Combined Paternity Index (CPI likelihood statistic) adapted to be used in the context of an SNV-based prenatal test. A kinship relationship is universally evaluated by comparing the likelihoods of observing the obtained genotypes given two alternative hypotheses (i.e., the Likelihood Ratio, LR). In the case of paternity testing, it is evaluated whether an individual is related to another individual with a father–son relationship versus the hypothesis that the two individuals are not related. The higher the LR, the more supported is the first hypothesis (paternity). The lower the LR (i.e., <1), the more supported is the second hypothesis (unrelated individuals). For each SNV, the Paternity Index (PI) is classically calculated as a likelihood ratio according to the Bayesian theorem . PI is defined as the ratio between the probability of the fetal genotype to be the observed one (event E) conditioned to the alleged father being the biological father (hypothesis H 1 ) and the same probability where the father is a random individual extracted from the population (hypothesis H 2 ) (PI = Pr(E | H 1 )/Pr(E | H 2 ) . When multiple loci are used to determine paternity, the product of all the individual PI values for each locus is the combined paternity index. 
PI formulas are adapted to the cases of prenatal tests where fetal genotypes need to be inferred from the maternal cffDNA samples [ , , ]. In particular, this was undertaken by also taking into account technical errors and natural effects (e.g., sequencing errors or ex novo mutations) that could lead to fetal genotype misinterpretation and eventually to a biased PI calculation. The CPI for a couple is then the product of all the PIs—one for each SNV included in the analysis—and paternity probability (W) is calculated as (CPI/CPI + 1)*100. Only SNVs where the mother genotype is homozygous have been included in CPI calculation because for heterozygous maternal positions it is statistically inaccurate to infer the fetal genotype from the maternal cell-free fetal DNA sample only . The algorithm defines the fetal genotype from the maternal cffDNA sample using a set of fetal base thresholds. In fact, for each SNV, if a low-frequency base is detected on the maternal sample, and it is different from the maternal homozygous genotype, the fetal genotype is inferred as heterozygous. In particular, a minimal coverage of 1000 reads for the maternal sample and 100 reads for the alleged father are required for the inclusion in the Paternity Index calculations. A base coverage of at least 100 reads is required to be assigned as fetal for allele characterized by frequency ranging from 1.5% to 15%. In the case of variants located on chromosome Y, the maternal zygosity filter is obviously inapplicable; however, for a coverage > 100 reads, the filter is still applied. In the end, the number of SNVs reporting a low-frequency allele varies among different meiosis samples, ranging from 131 to 173 with an average of 145 SNVs. Multiple checks and features are included in the algorithm to improve the robustness against both human and technological errors. The algorithm performs an assessment of relatedness indexes among all different sample pairs . 
Some thresholds are set in the algorithm to deal with noise and low coverage. In particular, the algorithm includes a noise reduction method to ensure a more robust call for the fetal base, starting from the mother’s genotype. Fetal genotype calling implements a SNV-specific threshold for the low-frequency alleles which relies on previously collected data of low-frequency alleles on samples without cffDNA. As a support, CPI is also calculated using a different set of thresholds optimized for low-fetal-fraction samples. Robustness of the CPI calculation is also assured by the usage of no less than 30 SNVs reporting a low-frequency allele (1.5–15%).
For each compatible couple, 100 synthetic brothers of the designated father were simulated (for a total of 900 simulate samples) to evaluate the performances of NIPAT-flow on individuals whose genetic profile was closely related to the real biological father. A two-step probabilistic model was designed to define the synthetic sample’s genotype for each SNV. Each father’s brother is then a sample drawn from this model. Given the designated father, a couple of synthetic parents was first sampled from an inferred probability distribution and then a synthetic son of theirs was generated using the equiprobable combination of their genotypes. In more detail, a Bayesian approach was used to infer the parents’ genotypes. For each SNV, the vector of the MAFs values was taken as the prior probability distribution. This quantity was updated considering the profile of the designated father, using the likelihood of its genotype conditioned to his parents’. The normalized product of these two quantities is a posterior probability distribution from which the genotypes of the parents are sampled. This probabilistic model allows us to create individuals sharing a major part of their genetic profile with the designated father. Defining concordance between two individuals as the percentage of SNVs showing an identical genotype over the total number of SNVs, the brothers showed, on average, a concordance of 68.6% with the designated father. This percentage varies equally both across the different brothers and different individuals (64.0–72.8%). As expected, the concordance between unrelated individuals was lower, ranging from 18.9% to 56.4% (average 36.1%).
To evaluate the reliability and robustness of NIPAT, couples formed from the maternal samples and the biological fathers ( n = 9), unrelated fathers ( n = 72), and simulated brothers ( n = 900) were tested by comparing the log(CPI) distributions between groups. First, the parent log(CPI) distribution was tested for normality using a Shapiro–Wilk test and evaluated with a skewness–kurtosis plot of the empirical distribution . This plot combines information about skewness and kurtosis, which are measures of the shape of a distribution, and can help decide whether a parametric or non-parametric statistical test is appropriate by indicating whether the data are normally distributed. Bootstrapping (nboot = 100) was used to test the stability of the skewness and kurtosis statistics under resampling, ensuring the reliability of the evaluation. As the distribution of log(CPI) was identified as non-normal, a non-parametric statistical test was chosen, since the inappropriate use of a parametric test would have given biased results. A Kruskal–Wallis rank sum test was used to test whether log(CPI) differed between biological fathers, unrelated couples, and simulated brothers. Post hoc tests were then performed with the two-tailed Wilcoxon test for two independent samples, and p-values were corrected for multiple testing using the false discovery rate correction. We set α = 0.0001, corresponding to a confidence level (1 − α) of 99.99%. This highly restrictive confidence level was set to reduce the risk of a false positive result. Moreover, to further confirm our result, we computed a 99.99% confidence interval for the true mean of the simulated brothers’ log(CPI) distribution. The statistical analysis was performed with the R programming language .
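The rank-based group comparison can be illustrated with a minimal sketch of the Kruskal–Wallis H statistic. The study itself used R; this stdlib Python version omits the tie correction and the chi-square p-value lookup, and the toy log(CPI) values below are invented, not the study’s data.

```python
from itertools import chain

def ranks(values):
    """Average ranks (1-based), handling ties by midrank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # midrank of the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def kruskal_h(*groups):
    """H statistic: 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1)."""
    data = list(chain.from_iterable(groups))
    r = ranks(data)
    n = len(data)
    h, start = 0.0, 0
    for g in groups:
        rg = r[start:start + len(g)]
        h += sum(rg) ** 2 / len(g)
        start += len(g)
    return 12 / (n * (n + 1)) * h - 3 * (n + 1)

fathers   = [34, 50, 60, 85]          # toy log(CPI) values per group
brothers  = [-30, -100, -108, -200]
unrelated = [-150, -250, -300, -360]
print(round(kruskal_h(fathers, brothers, unrelated), 2))  # → 9.27
```

A large H (compared against a chi-square distribution with k − 1 degrees of freedom) indicates that at least one group’s log(CPI) distribution differs, which is the pattern reported above.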
Here, we present a non-invasive prenatal paternity test (NIPAT) using cffDNA based on Ion S5 NGS technology. A custom PCR amplification panel of 861 SNVs was developed on the basis of MAF and the absence of correlation with human phenotypes. Nine pregnant women and their partners were recruited to test the performance of NIPAT. In particular, maternal peripheral blood samples were used to extract cell-free fetal DNA as the source of fetal material for the NIPAT workflow. Informative SNVs used for CPI calculation were selected based on the maternal genotype. The number of SNVs reporting a maternal homozygous genotype and a second low-frequency allele in our cohort ranges from 131 to 173, with an average of 145 SNVs. Log(CPI) values calculated for designated fathers ranged between +34 and +85, whereas log(CPI) values calculated for unrelated individuals were below −150 (full data available in ). To evaluate the performance of NIPAT on individuals whose genetic profile is closely related to the biological father, 100 synthetic full brothers of the designated father were simulated for each compatible couple (900 simulated samples in total). Log(CPI) values calculated for the simulated full brothers ranged between −30 and −200, with an average of −108 ( ). These values are still very far from the biological fathers’ log(CPI) values, strongly supporting the robustness of the NIPAT-flow test. Finally, the performance of NIPAT was consistent with the paternity compatibility threshold (log(CPI) > +4) and the paternity exclusion threshold (log(CPI) < −4) suggested by the reference guidelines and literature for SNV approaches . To assess the differences between the log(CPI) distributions of real fathers, unrelated couples, and simulated full brothers, we first evaluated the parent distribution of log(CPI).
A Shapiro–Wilk normality test showed that the distribution was strongly non-normal (W = 0.83711, p-value < 0.0001), as also shown in the skewness–kurtosis plot and noticeable in the density plot ( ). Therefore, we used a non-parametric approach to hypothesis testing. A Kruskal–Wallis rank sum test showed a statistically significant difference in log(CPI) between real fathers, unrelated couples, and simulated full brothers (χ² = 221.55, df = 2, p-value < 0.0001). Post hoc Wilcoxon tests reported a significant difference in all the comparisons ( ). The log(CPI) of the simulated brothers was significantly different from that of the biological fathers (W = 8100, p-value = 2.38 × 10^−7, adjusted p-value = 3.57 × 10^−7, 99.99% CI [−203.3075, −134.5006]). Both the lower and upper CI bounds were negative, meaning that the true mean of the simulated brothers cannot overlap with that of the biological fathers; the estimated difference in location of the means is 166.6658. The log(CPI) of the unrelated couples was significantly different from that of the biological fathers (W = 648, p-value = 1.146 × 10^−6, adjusted p-value = 1.146 × 10^−6, 99.99% CI [−359.8096, −243.4905]). The log(CPI) of the simulated brothers was significantly different from that of the unrelated couples (W = 64691, p-value = 4.559 × 10^−45, adjusted p-value = 1.367 × 10^−44, 99.99% CI [−156.0940, −121.1659]). As the NIPAT-flow algorithm reported zero chance of obtaining a log(CPI) value greater than 0 for the simulated brothers or the unrelated couples, we conclude that it can reliably identify biological fathers; the high statistical significance of these comparisons confirms the reliability of NIPAT in detecting the biological father.
Here, we present a Non-Invasive Prenatal Paternity Test (NIPAT) using cffDNA based on Ion S5 NGS technology, which is already used in many laboratories for forensic NGS protocols. A custom PCR amplification panel of 861 SNVs was developed on the basis of MAF. The average MAF of the panel is 0.321 (median 0.347), while the number of SNVs per chromosome ranges from 12 to 95 (average 36). The algorithm was tested on nine pregnant women and their partners. NIPAT generated log(CPI) values for designated fathers ranging from +34 to +85, whereas log(CPI) values calculated for unrelated individuals were below −150. This separation between the two groups demonstrates the robustness of NIPAT, making it an extremely reliable tool for determining paternity with a high degree of confidence. One of the main challenges for paternity testing is the ability to distinguish two possible fathers when they are biologically related. It is not particularly rare for two possible fathers to be related and therefore share many DNA variants, so obtaining conclusive results may be challenging with traditional short tandem repeat (STR)-based methods. Full brothers share 50% of their DNA and represent a typical case of disputed paternity between related putative fathers. To evaluate the robustness of NIPAT, we calculated log(CPI) for 100 simulated full brothers of each biological father. More than 900 simulated samples were analyzed, and log(CPI) values were compared between the biological fathers and their 100 virtual full brothers. Log(CPI) values calculated for the simulated full brothers ranged between −30 and −200, with an average of −108. The difference between log(CPI) values for designated fathers and simulated full brothers was very large, and the two distributions never overlapped.
This means that the chance of a false positive is approximately zero, so the NIPAT test remains robust even at high rates of shared DNA. Finally, the performance of NIPAT is concordant with the paternity compatibility threshold log(CPI) > +4 and the paternity exclusion threshold log(CPI) < −4 suggested by reference guidelines and literature for SNV approaches . To date, there are no universally accepted thresholds for the confirmation of paternity, for exclusion, or for inconclusive results for either STR- or NGS-based methods. Accredited laboratories are expected to establish an internal range for inconclusive results, with such values dependent on the methods, the validation studies, the number of SNPs, and so on. We would like to point out that NIPAT showed differences in log(CPI) between designated fathers and simulated full brothers that are striking and unattainable with conventional analysis. Generally accepted ranges for inconclusive cases are 10^−2 < LR < 10^2, or 10^−4 < LR < 10^4. The CPI values calculated for designated fathers ranged between 10^+34 and 10^+85, whereas the CPI values calculated for unrelated individuals were below 10^−150. As a stress test, CPI values calculated for simulated full brothers of the designated fathers ranged between 10^−30 and 10^−200, with an average of 10^−108. These data confirm that a genomic approach analyzing hundreds of variants by next-generation sequencing can represent an opportunity for paternity testing compared with traditional methods based on STR typing. The ability to interpret hundreds or thousands of SNVs allows discriminating power unimaginable only a few years ago. Non-invasive prenatal paternity tests using cell-free fetal DNA need to analyze hundreds of genetic variants, and NGS technology and statistical approaches are now mature enough to support robust methods ensuring the correctness of results.
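The reporting thresholds discussed above (log(CPI) > +4 supports paternity, log(CPI) < −4 supports exclusion, values in between are inconclusive) reduce to a simple decision rule. The function below is an illustrative sketch, not the laboratory’s actual reporting logic.

```python
def paternity_call(log_cpi, include_at=4.0, exclude_at=-4.0):
    """Map a log(CPI) value to a report category using the guideline
    thresholds cited in the text."""
    if log_cpi > include_at:
        return "paternity supported"
    if log_cpi < exclude_at:
        return "paternity excluded"
    return "inconclusive"

print(paternity_call(34))    # designated fathers in this study: +34 to +85
print(paternity_call(-108))  # average simulated full brother
print(paternity_call(1.5))   # falls inside the inconclusive band
```

Because the observed distributions sit so far outside the ±4 band, every designated father and every simulated brother in the study would be classified unambiguously.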
We believe that these data strongly support the robustness of the NIPAT-flow test, representing an interesting approach for scientists working in the field.
Artificial intelligence-based chatbot assistance in clinical decision-making for medically complex patients in oral surgery: a comparative study

The field of oral surgery presents unique challenges, particularly when managing medically complex patients . Effective treatment of these cases requires a comprehensive understanding of patients’ medical histories to ensure safe surgical procedures, appropriate management of complications, and a smooth recovery. Medical consultations among healthcare providers are essential for achieving optimal patient outcomes, as effective communication between professionals plays a critical role . However, research indicates that interprofessional communication is often suboptimal, leading to disruptions in care continuity, diagnostic delays, excessive medication use, unnecessary testing, reduced healthcare quality, wasted time, and increased financial costs . The healthcare industry is undergoing a significant transformation driven by the rapid advancement of artificial intelligence (AI) technologies . Traditional, time-consuming, and observer-dependent tasks are increasingly being replaced by AI-based approaches, which can match or even exceed human accuracy . AI-based chatbots, also known as large language models (LLMs), are advanced software applications that rely on several key technologies to function effectively. Natural Language Processing (NLP) enables chatbots to understand and interpret human language, while Machine Learning (ML) allows them to improve their responses over time by learning from interactions . Using NLP algorithms, chatbots engage in human-like conversations, interpret user queries, and provide immediate responses . The capacity of AI-based chatbots to provide valuable medical information has made them increasingly appealing to both patients and physicians.
By delivering precise, real-time answers, they offer a significant advantage over traditional online resources, enhancing their popularity and fostering user trust . Researchers suggest that chatbots could become valuable tools for medical professionals in the future and help alleviate the burden on healthcare systems . While AI chatbots have been studied in the context of patient education, their effectiveness in assisting healthcare professionals in clinical decision-making, particularly in oral and maxillofacial surgery (OMFS), remains underexplored. It is still uncertain whether these chatbots can consistently offer reliable information to healthcare professionals and assist them in making informed clinical decisions . Given the complexity of managing medically compromised patients in oral surgery, it is crucial to assess whether AI-based chatbots can provide reliable, evidence-based guidance for clinicians. This study aimed to investigate whether ChatGPT-3.5 and Claude-instant can serve as reliable sources of medical information and to explore their potential to assist professionals in clinical decision-making. By addressing this gap in the literature, the research contributes to the ongoing discussion about the role of AI-driven tools in clinical practice. The null hypothesis was that the chatbots would perform comparably in terms of accuracy, completeness, and quality when providing information on oral surgery for medically complex patients.
Ethical approval

This study did not involve human or animal subjects; therefore, ethical approval was not required, consistent with previous studies .

Sample size and study design

The study was designed as an analytical cross-sectional observational study, following the STROBE checklist, similar to previous research . The sample size estimation was performed using the G*Power 3.1.9.2 software (University of Düsseldorf, Düsseldorf, NRW, Germany). The following parameters were considered: (a) test power of 0.8, (b) significance level of 0.05, and (c) effect size of 0.25. Based on these standards, the minimum sample size required was 34 for reliability analyses and 47 per group for difference analyses.

Question development

The study aimed to evaluate the reliability of chatbots and assess the quality, accuracy, and completeness of their responses to specific medical questions. To achieve this, a pool of questions was created, similar to previous studies . Three experienced volunteer oral and maxillofacial surgeons (Surgeons A, B, and C, with 10, 12, and 17 years of experience, respectively), acting as content experts, developed the questions de novo. The developers were instructed to ensure that the questions met the following criteria: they should be single-focused, clear, and easy to understand; reflect real-world situations; be written in a scientific manner; and be relevant to the field of OMFS. Relevant literature was identified through a comprehensive search focusing on systemic diseases and common conditions that typically require professional consultation or may raise concerns during oral surgery. The search terms included specific keywords related to oral and maxillofacial surgery, systemic diseases, and common conditions encountered in this field.
These terms included, but were not limited to: ‘oral surgery,’ ‘systemic diseases and oral surgery,’ ‘prevalence of systemic diseases in oral surgery,’ ‘oral health and systemic conditions,’ ‘dental management considerations,’ ‘oral surgery complications,’ ‘oral surgery risk factors,’ and ‘oral surgery patient management.’ The terms were used in various combinations across databases such as PubMed, Scopus, and Google Scholar to ensure the selection of evidence-based and clinically relevant topics for the development of the questions. A total of 89 questions were developed. To assess the validity of these questions, each one was evaluated by 10 volunteer oral and maxillofacial surgeons using Lawshe’s Content Validity Index (CVI), a widely recognized method for establishing content validity . This method helps determine whether to retain or reject individual items. Experts rated each question as “essential,” “useful but not essential,” or “not necessary.” These ratings were then converted into a quantitative ratio known as the Content Validity Ratio (CVR), using the formula

$$\mathrm{CVR} = \frac{n_e - N/2}{N/2}$$

where $n_e$ is the number of experts who rated the item as “essential” and $N$ is the total number of experts. The critical CVR value is 0.62 for 10 raters at a 0.05 significance level . Hence, questions with a CVR value of ≥ 0.62 were selected for inclusion. As a result, 64 open-ended, clinically relevant questions requiring text-based responses were included. Each question was framed with the prompt: “How would you respond to the following question if you were a doctor?” Examples of the questions are shown in Table .

Data collection

Two chatbots were selected for evaluation: ChatGPT 3.5 (OpenAI, San Francisco, USA) and Claude-instant (Anthropic, San Francisco, USA). The chatbots were accessed online through a new account created in February 2024 for the study. A new chat window was opened for each question to minimize the influence of prior responses. All questions and responses were in English. The identical set of questions was administered in two sessions, one week apart. In each session, the questions were asked consecutively to the chatbots, and their responses were recorded simultaneously. The content generated by the chatbots was used solely for research purposes. The questions and responses are provided as supplementary data. [Supplementary materials]

Chatbot evaluation

The raters for this study were two oral and maxillofacial surgeons, with 12 and 15 years of experience, respectively. They were not involved in developing the questions, ensuring unbiased evaluations. The number of raters was determined to maximize reliability and agreement levels . To minimize potential bias, inter-rater agreement was assessed using Cohen’s kappa coefficient. The kappa values were interpreted as follows: ≤ 0 indicates no agreement; 0.01–0.20, none to slight agreement; 0.21–0.39, fair agreement; 0.40–0.59, weak agreement; 0.60–0.79, moderate agreement; 0.80–0.90, strong agreement; and > 0.90, almost perfect agreement .
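The inter-rater agreement statistic described above can be computed with a few lines of standard-library Python. This is a generic sketch of Cohen’s kappa; the two rating sequences are invented for illustration and are not the study’s data.

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance)."""
    assert len(r1) == len(r2)
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n       # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    cats = set(c1) | set(c2)
    p_e = sum(c1[c] / n * c2[c] / n for c in cats)      # chance agreement
    return (p_o - p_e) / (1 - p_e)

rater1 = [5, 5, 4, 5, 3, 5, 4, 4, 5, 5]   # toy Likert ratings
rater2 = [5, 5, 4, 4, 3, 5, 4, 5, 5, 5]
print(round(cohens_kappa(rater1, rater2), 3))  # → 0.63
```

On the toy data the kappa of about 0.63 would fall in the “moderate agreement” band of the interpretation scale above.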
The raters were blinded to the identity of the chatbots. Consensus scores for each answer were determined based on practical clinical knowledge and PubMed, as described in a previous study . Quality was assessed using a modified DISCERN tool (mDISCERN), as in previous studies . DISCERN is a validated tool for assessing the quality of written consumer health information on treatment options . Details of the mDISCERN scoring are provided in Table . Accuracy and completeness were assessed using a Likert scale (Table ). Through an internal validation process, answers that received lower ratings were subjected to retesting after 12–14 days. Answers rated below 3 points for accuracy were not evaluated for completeness. To assess each chatbot’s reliability, the Intraclass Correlation Coefficient (ICC) was used, consistent with a previous study . The flowchart of the methodology is shown in Fig. .

Statistical analysis

Skewness and kurtosis coefficients were calculated to examine the normality of the data distribution. The Mann-Whitney U test, chosen because the data were non-normally distributed, was used to compare the chatbots. The intraclass correlation coefficient (ICC, 95% CI) was used to evaluate intra-rater agreement for the chatbots. All analyses were performed using SPSS for Windows (release 21.0, SPSS Inc.), with a 5% significance level.
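The between-chatbot comparison above can be illustrated with a minimal, dependency-free sketch of the Mann-Whitney U statistic (the study itself used SPSS). The normal-approximation p-value is omitted, and the two score lists are invented toy data, not the study’s ratings.

```python
def mann_whitney_u(x, y):
    """U statistic for sample x via pairwise comparison (ties count 0.5)."""
    u = 0.0
    for a in x:
        for b in y:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

chatgpt = [5, 5, 4, 5, 3, 5]     # toy quality scores per chatbot
claude  = [4, 5, 3, 4, 4, 5]
u = mann_whitney_u(chatgpt, claude)
print(u)  # compare against n1 * n2 / 2 = 18 expected under H0
```

A U far from n1·n2/2 in either direction signals a systematic rank difference between the two groups; the study then judged significance at the 5% level.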
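The CVR screening used during question development reduces to a short computation. Only the 10-rater critical value of 0.62 is taken from the text; the example ratings are invented for illustration.

```python
def cvr(n_essential, n_raters):
    """Lawshe's Content Validity Ratio: (n_e - N/2) / (N/2)."""
    return (n_essential - n_raters / 2) / (n_raters / 2)

CRITICAL_CVR = 0.62   # threshold for 10 raters at alpha = 0.05

# hypothetical counts of experts (out of 10) rating each item 'essential'
ratings = {"Q1": 9, "Q2": 8, "Q3": 6}
kept = [q for q, n_e in ratings.items() if cvr(n_e, 10) >= CRITICAL_CVR]
print(kept)  # → ['Q1']  (CVR 0.8; Q2 scores 0.6 and Q3 scores 0.2, both dropped)
```

Applying this rule to the original 89 items is what reduced the pool to the 64 questions used in the study.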
The chatbots provided one response to each question. Each question was administered to the two chatbots across two separate sessions, one week apart, resulting in 128 responses per chatbot (64 questions × 2 sessions × 1 response per session), or 256 responses in total. The majority of answers were rated as high quality: 86% (n = 55/64) and 79.6% (n = 51/64) of ChatGPT’s responses received scores of 5 in sessions 1 and 2, respectively, as did 81.25% (n = 52/64) and 89% (n = 57/64) of Claude-instant’s responses. In terms of accuracy, most answers were rated as completely correct (scores of 4 or above): 92% (n = 56/61) and 93.4% (n = 57/61) of ChatGPT’s responses in sessions 1 and 2, respectively, and 95.2% (n = 60/63) and 89% (n = 57/64) of Claude-instant’s responses. Regarding completeness, most answers were rated as adequate or comprehensive (scores of 2 or above): 88.5% (n = 54/61) and 86.8% (n = 53/61) of ChatGPT’s responses in sessions 1 and 2, respectively, and 95.2% (n = 60/63) and 86% (n = 55/64) of Claude-instant’s responses. Responses to questions on medication-related osteonecrosis of the jaw (MRONJ) (Q62–64) received the lowest accuracy scores. The quality, accuracy, and completeness scores from both chatbots across the two sessions are summarized as mean [SD] and median [IQR] in Table . Both chatbots showed high consistency in quality across both sessions and moderate consistency in completeness in each session (Table ). When comparing the chatbots, no statistically significant differences were found in accuracy or completeness; however, ChatGPT showed significantly higher quality in the first session (Table ).
The inter-rater agreement was assessed using Cohen’s kappa test, yielding a kappa coefficient (95% CI) of 0.736, indicating a good level of agreement between the two raters.
This study assessed the potential of two AI-based chatbots to assist professionals in clinical decision-making for medically complex oral surgery patients. AI chatbots can vary widely in their performance due to differences in algorithms, datasets, training methods, and design objectives . In this study, the chatbots were selected based on specific criteria, prioritizing ease of access and free subscription. ChatGPT 3.5 was chosen as a pioneering large language model (LLM) with over 100 million users since its release in November 2022 . For comparison, Claude-instant, introduced in March 2023, was selected as a representative of Constitutional AI, a novel alignment strategy focused on context-aware responses aligned with human values . The findings indicate that both chatbots performed similarly in terms of accuracy and completeness. However, ChatGPT received significantly higher quality scores than Claude-instant (p < .001), leading to the rejection of the null hypothesis. One of the study’s main strengths is that, to the best of our knowledge, it is the first in the field of oral surgery to compare the performance of two different AI-based chatbots across two separate sessions. Additionally, the existing literature primarily focuses on chatbots responding to patient queries, with evaluations centered on patient needs; even in those studies, however, the assessments were made by professionals. In contrast, this study involves professionals assessing the chatbots’ performance specifically for professional use, providing early evidence on the reliability of chatbots in delivering qualified, accurate, and comprehensive information for clinical decision-making while highlighting potential limitations in AI-generated medical content. The practical application of AI-based chatbots in clinical settings is diverse and can enhance multiple aspects of healthcare delivery.
Several studies explore the potential of chatbots to integrate seamlessly into existing workflows, enhancing patient interactions, streamlining administrative tasks, and supporting clinical decision-making. For example, chatbots could serve as supplementary tools in clinical settings to triage patients and conduct initial assessments , provide personalized health advice , and function as auxiliary assistants in clinical environments . AI chatbots, such as ChatGPT, can extract information from unstructured data sources like electronic health records, identify patterns and recurring symptoms, and generate diagnostic reports. By automating these tasks, they have the potential to reduce the workload of frontline healthcare workers during routine medical checks. This could, in turn, help alleviate healthcare worker shortages and improve overall efficiency in clinical settings . Despite these promising applications, regulatory considerations for AI-based chatbots in patient care are essential. Ensuring patient safety, data protection, and transparency is critical. There is also a need for clear guidelines regarding liability and accountability, especially in cases where erroneous or harmful advice is provided. Furthermore, continuous monitoring and quality assurance are necessary to ensure that these AI systems remain effective and up to date with evolving clinical standards. Several issues need to be addressed regarding the use of chatbots as a source of medical information. One important factor is the formulation of questions, which significantly affects chatbot responses . Studies have employed various formats, including multiple-choice questions , open-ended questions as used in this study , or a combination of both . Open-ended questions better capture the nuances of medical decision-making . However, to ensure standardization, it is essential to structure them consistently. For instance, Wilhelm et al.
employed a straightforward pattern, framing questions as “How to treat …?” , while Azadi et al. prefaced them with “What would your response be to the following question if you were an oral and maxillofacial surgeon?”. Building on previous studies , a pool of open-ended questions was developed de novo in this study, and a structured framework was applied to ensure they accurately reflected the complexities physicians face in clinical practice. Care was taken to maintain consistency in question development, with the goal of standardizing the evaluation process and eliciting detailed responses. Another significant concern about AI-driven information is the variability in chatbot responses to identical questions. Sanmarchi et al. reported that responses can vary when the same question is repeated, reflecting the nature of ML algorithms . Most studies have posed each question only once. To address this variability, questions in this study were posed in two separate sessions. A one-week interval was selected to minimize memory bias for both raters and chatbots; this waiting period was also intended to better simulate a real-life scenario, where patients typically experience some time between consultations. Furthermore, through an internal validation process, the answers that received lower ratings were re-rated after 12–14 days. In this regard, the current study aligns with existing research. Several studies in the literature have re-evaluated chatbot-generated responses: Onder et al. tested each question twice on different days for variation in answers but did not specify the duration between tests, while in studies by Wilhelm et al. and Goodman et al. re-evaluations occurred between 8 and 17 days. In this study, internal consistency showed almost perfect agreement in quality for both chatbots, though completeness exhibited moderate agreement.
In addition, all responses obtained were relevant and generated within seconds, but some were broad or non-specific, while others were detailed. For example, in Q9 ("Is it safe to perform oral surgery on patients taking Coumadin?"), ChatGPT included the International Normalized Ratio (INR) value in its response. In Q34 ("Can oral surgery be performed on patients receiving radiotherapy to the head and neck region?"), both chatbots provided a detailed answer, mentioning hyperbaric oxygen therapy and antibiotic prophylaxis. In Q22 ("Is hematocrit level important for performing oral surgery?"), Claude-instant provided a specific numeric value for the hematocrit level, whereas ChatGPT did not. Chatbots' responses generally indicated that oral surgery in patients at risk for infective endocarditis should be approached with caution, recommending prophylactic antibiotic use in line with established guidelines. However, upon closer examination, while the chatbots acknowledged risk factors, they occasionally oversimplified the decision-making process. This simplification may have overlooked critical nuances, such as the specific dental procedure being performed or the presence of patient comorbidities, both of which are key considerations when making informed clinical decisions. In another example, chatbot responses on the management of leukemia patients emphasized the need for multidisciplinary care, including attention to immunosuppression and bleeding risks. This aligns with existing literature that underscores the complexity of surgical interventions in immunocompromised patients. However, the chatbots' responses sometimes lacked depth, particularly regarding the importance of preoperative hematological assessments or the specific timing of surgical interventions in relation to chemotherapy cycles. These factors are essential in clinical decision-making and were not sufficiently addressed in the chatbot-generated responses, highlighting a gap in their clinical applicability.
Notably, responses with the lowest accuracy were specifically related to MRONJ, likely due to its evolving status in OMFS. We observed that AI chatbots struggle to provide accurate interpretations without specialized training, particularly in areas where personalized information and human judgment are essential. This result is consistent with the study by Suárez et al., which was also conducted in oral surgery and shares a similar methodology with this study. The researchers reported that ChatGPT, by its nature, does not specify the sources of its information and cannot access recently updated documents. This finding underscores the current limitations of AI-based chatbots in handling specific medical topics and highlights the need for continuous updates and training to improve their reliability. The ethical risks associated with AI-generated medical content must be carefully considered, as misinformation could compromise patient safety. The use of non-specialized training data, the potential for outdated information, and ethical and legal concerns regarding patient confidentiality necessitate thorough evaluation. Goodman et al. evaluated ChatGPT's responses to medical queries from 33 physicians across 17 specialties and found that, while ChatGPT generally provided accurate information, it occasionally made unwarranted assumptions. This phenomenon, known as "hallucination," refers to the generation of scientifically incorrect content. It occurs when a chatbot provides seemingly reliable but inaccurate answers, posing a serious concern due to the potential for misinformation in clinical settings. The real danger of these made-up "facts" is that they often appear scientifically plausible, making them particularly misleading. Chow et al. suggested that if ChatGPT were professionally trained, it could operate more efficiently, access larger datasets, and help reduce medical errors.
However, the dynamic and continuously evolving nature of AI learning makes it challenging to ensure the credibility of the information generated by AI models. Accurate medical information is critical for patient health, and medicine cannot rely on tools that occasionally provide incorrect answers, even if such instances are infrequent. In this study, no instances of hallucination were observed; however, this finding should be interpreted with caution, as the controlled study design may have played a role in the absence of hallucinations. In OMFS, several studies have investigated the information provided by chatbots. Balel conducted a study evaluating the usability of ChatGPT in OMFS by assessing the quality of patient information and educational content produced by ChatGPT. Commonly asked patient questions about OMFS procedures, as well as technical questions for training purposes, were posed to the chatbot, and the responses were evaluated by 33 academic maxillofacial surgeons. The study reported that, despite concerns about its safety in educational contexts, ChatGPT demonstrates significant potential as a valuable tool for patient information in OMFS. Similarly, Acar compared the effectiveness of three AI-based chatbots (ChatGPT, Microsoft Bing, Google Bard) regarding the information they provide to patients. Twenty questions related to oral surgery complications were posed to each chatbot, and 10 oral surgeons evaluated the responses for accuracy and completeness. ChatGPT provided both more accurate and more understandable answers than the other two platforms. Jacobs et al. evaluated the accuracy and readability of AI-generated responses to common patient questions regarding third molar extraction, specifically using ChatGPT, and reported that ChatGPT provided largely accurate information, though with some minor inaccuracies. The present study yielded similar results, with ChatGPT receiving higher quality scores.
This may be attributed to ChatGPT being the large language model (LLM) with the largest user base worldwide. The advancement of LLMs in generating knowledge is largely due to their continuous training on extensive text data and a feedback-loop mechanism through which they learn from corrections and user interactions. However, these studies have primarily focused on assessing content intended for patients. Consequently, the potential of AI-based chatbots to deliver valuable insights to healthcare professionals remains an underexplored area. A study similar to the present one was conducted by Azadi et al., who evaluated the accuracy of chatbot responses to clinical decision-making questions in OMFS using the Global Quality Scale (GQS). Their study assessed Google Bard, GPT-3.5, GPT-4, Claude-instant, and Bing by presenting them with 50 case-based questions prepared by three oral and maxillofacial surgeons. These questions were designed in both multiple-choice and open-ended formats, specifically focusing on OMFS-related topics. While the chatbots performed relatively well in answering open-ended questions, the study concluded that they are not yet reliable advisors for clinical decision-making due to significant inaccuracies in their responses. Additionally, the researchers noted a preference for asking open-ended questions rather than multiple-choice ones when using these AI tools. Given the similarities in methodology, this study also adopted an open-ended question format to better reflect real-world usage and assess the quality of chatbot-generated responses in a clinical context. In comparison to the existing literature on AI in clinical decision-making, these findings suggest that while chatbots may provide useful guidance in general terms, they often fall short of capturing the full complexity of clinical scenarios.
This limitation is important to consider when evaluating the potential for chatbots to be integrated into real-world clinical settings, where nuanced decision-making is frequently required. The study has several limitations. First, it was conducted at a single center and evaluated by only two experts. Although this approach was chosen to ensure maximum reliability and consensus, as supported by the literature, using only two expert raters may introduce confirmation bias, as their assessments might be influenced by preexisting expectations or familiarity with clinical guidelines. Furthermore, the single-center design limits the diversity of expert opinions, and multicenter studies with a larger number of evaluators could provide more comprehensive insights. In this context, potential biases related to the evaluation process should be acknowledged. Another concern is the inherent risk of bias arising from the chatbot training data, which may lead to systemic biases in the generated responses. The data used to train a chatbot may contain biases or imbalances that reflect the views, demographics, or limitations present in the original sources. Since the chatbot learns from this training data, any existing bias, such as the underrepresentation of certain groups, stereotypes, or outdated information, can be incorporated into and reproduced in its responses. This can result in systematic errors or skewed information, which may affect the quality and fairness of the chatbot's output in real-world applications. The present study, like most chatbot studies, was conducted in English and yielded results similar to those studies. To our knowledge, only one study, by Soto-Chávez et al., has evaluated ChatGPT's performance in Spanish; it reported that while ChatGPT can be a reliable source of information for Spanish-speaking patients, its readability and accuracy vary across languages. Numerous AI-based chatbots are available today, including those specifically designed for medical purposes.
However, this study focused solely on two general-purpose chatbots, chosen for their free accessibility, ease of use, and widespread recognition. This selection may restrict the broader applicability of the findings and does not account for potential variability among other AI models. The field of AI is rapidly evolving, and the quality, accuracy, and completeness of chatbot responses may improve with subsequent model updates. At the time of the study, the most advanced iterations of these chatbots were available only in a limited number of countries and required a paid subscription. However, evidence has been presented suggesting that there is no significant difference in the quality of medical content generated by ChatGPT 3.5 and ChatGPT 4. Based on this evidence, it was decided to proceed with ChatGPT 3.5 for this study. Further research exploring the performance of newer iterations of the model could make a valuable contribution to the literature. The limited number of questions included in the study may not fully capture the breadth of clinical scenarios encountered in practice. Future research incorporating a broader and more discriminating set of questions could better assess the capabilities and limitations of AI tools in diverse clinical contexts. Despite these limitations, this study is novel in both its aim and methodology. It enhances the existing literature on AI-based chatbots in oral and maxillofacial surgery by evaluating the quality of medical content intended for professionals.
In conclusion, this study underscores the potential of AI-based chatbots to support professionals in clinical decision-making for medically complex patients undergoing oral surgery. It also highlights the necessity for ongoing advancements in AI-generated content to ensure patient safety and deliver high-quality, reliable, and accurate information. Further research is needed to assess the evolution of these tools over time, addressing the dynamic nature of machine learning algorithms and their limitations. Although they are currently insufficient as a sole source of information, AI-based chatbots continue to develop and offer a promising solution to the growing demand for medical care. Their potential to enhance the efficiency and effectiveness of healthcare could help alleviate the workload of healthcare professionals, reduce costs, and save time.
Below is the link to the electronic supplementary material. Supplementary Material 1
Sex-selective abortions over the past four decades in China

Induced abortion has been a great concern in China. According to official statistics, the annual number of registered induced abortions increased from 5 million in the early 1970s to a peak of 14.37 million in 1983, then fluctuated between 10 and 14 million for a decade. Since 1993, the annual number has fallen below 10 million, and it was 8.96 million in 2020. Induced abortions fall into three categories in China, namely voluntary induced abortions, involuntary induced abortions in submission to the family planning policy, and sex-selective induced abortions, which are interwoven with the first two categories. As for voluntary induced abortion, China legalized induced abortions to satisfy the demand for voluntary control over excess births as early as the 1950s. After 2000, small-scale surveys of young women indicate that women under 25 years old and unmarried women accounted for an increasing share of induced abortions. Involuntary induced abortion has been mainly linked to and complicated by China's birth control policy over the past decades. Ever since the early 1980s, forced abortions have been prevalent in the implementation of the birth control policy, helping family planning cadres achieve their quotas, and compliance with family planning was a prominent reason for induced abortions. In many official provincial family planning regulations, induced abortion was introduced as a remedial measure for out-of-quota pregnancies, following the guidelines of the national decree. Sex-selective induced abortion has been reported since the early 1980s, with the strict implementation of the one-child-per-couple policy and the availability of sex identification technology.
The decline in fertility, whether spontaneous due to socioeconomic development or involuntary due to compliance with the family planning policy, increased the pressure for sex selection in the context of son preference. The introduction of ultrasound B machines and the availability of sex identification technology made the abortion of female fetuses widespread in China. Because sex-selective abortion has been prohibited, the extent to which sex-selective induced abortions are practiced is subject to debate, and the actual number of selective abortions is impossible to obtain; it can only be estimated. An estimate made three decades ago claims that, even if the abortion of female fetuses could explain the entire distortion in China's SRB, it would account for less than 5 percent of all abortions reported for 1986. Another survey of 820 women conducted in central rural China in 2000 found that 36 percent of 301 reported induced abortions were female sex-selective abortions. Selective abortion could change with the introduction of prenatal care technology that can predict fetal sex with far greater certainty. Sex-selective abortions contribute mainly to the phenomenon of missing girls, which includes both prenatal and postnatal missing girls. Prenatal missing girls result mainly from the high sex ratio at birth as a result of sex-selective induced abortions, and postnatal missing girls arise from excess female infant and child mortality due to infanticide and the abandonment of female children, as well as the discriminatory treatment of girls' illnesses leading to excess female deaths. To date, studies have mainly focused on a quantitative discourse on SRBs and adopted the SRB as an indicator of sex selection. However, sex-selective abortion itself has been under-examined from a quantitative perspective, except for being recognized as one contributor to high SRBs.
In this paper, we used annual data on the officially registered number of births, induced abortions, and SRB to estimate the annual number of sex-selective abortions, and then estimated two related proportions over the past decades. For certain years with data, we elucidate the differences by birth order, residence, and province. Below we first introduce the method, then the data; after this, we present the results and conclusions.

Let $N_a$ represent the number of induced abortions, $N_{ssa}$ the number of sex-selective induced abortions, $B$ the number of births, and $B_m$ and $B_f$ the numbers of male and female births. $SRB_o$ denotes the observed SRB, and $SRB_n$ the normal SRB without selective abortions. China's normal SRB is generally assumed to be 106 male births for every 100 female births, therefore we adopt $SRB_n = 106$. It is generally assumed that male fetuses are not selectively aborted and that the number of male births can be taken as a benchmark. We calculate the number of sex-selective induced abortions of female fetuses as:

$$N_{ssa} = \frac{B_m}{SRB_n} \times 100 - B_f \qquad (1)$$

If we assume that spontaneous abortions and stillbirths were naturally conceived and gender-neutral, and that sex-selective abortions $N_{ssa}$ are included in the total number of induced abortions $N_a$, then the proportion is

$$P_{ssa/a} = \frac{N_{ssa}}{N_a} \times 100\% \qquad (2)$$

We can also calculate the proportion of sex-selective induced abortions relative to the expected number of female births, computed as the sum of female births and sex-selective induced abortions, as expressed in Formula (3).
$$P_{ssa/(ssa+B_f)} = \frac{N_{ssa}}{N_{ssa} + B_f} \times 100\% \qquad (3)$$

The annual birth numbers can be obtained from the Ministry of Public Security for household registration, the National Family Planning Commission for monitoring births in family planning implementation, and the National Bureau of Statistics (NBS) as the authoritative organization of population statistics. Underreporting has been a concern in China's birth data quality. China's NBS was aware of the issue of underreporting in censuses, intercensal 1% population sample surveys, and annual one-per-thousand sample surveys, so the NBS adjusted the fertility levels upward. The registered total fertility rate (TFR) in the 2000 census is 1.22, but the fertility rate internally used by the NBS is 1.40, indicating an official acknowledgment of a severe under-enumeration of births in census data. The crude birth rates (CBR) derived directly from the 2000 and 2010 census data were 9.85‰ and 9.43‰, respectively, but the officially announced CBRs for 2000 and 2010 were 14.03‰ and 11.90‰, respectively, as available in the official yearbooks. A new consensus argued that the NBS over-adjusted the fertility levels. In this paper, we mainly used the official data. For annual births, the NBS published annual year-end population size and crude birth rates. With these, we calculated annual births from 1980 to 2020, which are consistent with the data published in the annual statistics communiques available from 1997 to 2020. In addition to underreporting of births, there is sex-selective underreporting of female births and a controversial level of SRB. Sex-selective underreporting of female births is regarded as a determinant in the distortion of China's SRB.
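Formulas (1)–(3) can be sketched as follows; the input figures are hypothetical round numbers for illustration, not values from the paper's tables:

```python
def sex_selective_abortions(male_births, female_births, srb_normal=106.0):
    """Formula (1): expected female births implied by male births at the
    normal SRB, minus observed female births."""
    return male_births / srb_normal * 100.0 - female_births

def share_of_all_abortions(n_ssa, total_induced_abortions):
    """Formula (2): sex-selective abortions as a percentage of all
    registered induced abortions."""
    return n_ssa / total_induced_abortions * 100.0

def share_of_expected_female_births(n_ssa, female_births):
    """Formula (3): sex-selective abortions as a percentage of expected
    female births (observed female births plus aborted female fetuses)."""
    return n_ssa / (n_ssa + female_births) * 100.0

# Hypothetical year: 9.0 million male births, 7.7 million female births,
# 9.0 million registered induced abortions
n_ssa = sex_selective_abortions(9.0e6, 7.7e6)               # ≈ 0.79 million
p_abortions = share_of_all_abortions(n_ssa, 9.0e6)          # ≈ 8.8 %
p_expected = share_of_expected_female_births(n_ssa, 7.7e6)  # ≈ 9.3 %
```

Note that Formula (1) can yield a negative value when the observed SRB falls below 106, which is read as an absence of measurable selection against girls rather than a literal count.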
While most studies argue that female births have been predominantly under-enumerated and that the actual SRB should be lower than observed, some others believe that it is male births that are underreported instead. There are also claims that female under-reporting accounts for a very small portion of China's higher SRB, and that the majority of the SRB distortion is attributable to sex-selective abortion of female fetuses. For SRB data, in census years or intercensal 1% population sample survey years, the SRB was derived from the census or survey data. For other years after 1986, the SRB was calculated based on the annual one-per-thousand sample survey. For years before 1986, the SRB data were cited from the tabulated results of China's 1988 National Fertility Survey (the two-per-thousand survey), a representative survey with an emphasis on the birth histories of 459,000 married women aged 15–57 in 1988. One concern is the comparability and consistency of the SRB over time due to the marked variation in sample size, from over 10 or 20 million births in census years to just more than 10 thousand births in one-per-thousand sample surveys. In most years, the SRB was calculated based on a one-per-thousand sample, and the smallest sample size exceeded 10 thousand births, which attenuates the comparability concern. The accuracy of the number of induced abortions is also a subject of debate. China registers induced abortions (NHFPC, 2021), and some demographers argue that the official figures are a reasonably accurate representation of total induced abortions. However, portions of induced abortions were not registered in the official system, partly due to the social unacceptability of extramarital pregnancies in China and the illegalization and prohibition of non-medical sex-selective abortions. Recently, young and unmarried women have accounted for an increasing portion of abortions.
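The sampling-noise side of the comparability concern can be quantified: treating male births as a binomial draw, a rough confidence interval for an SRB observed in a one-per-thousand sample can be sketched as follows (an illustrative calculation, not part of the paper's own analysis):

```python
import math

def srb_confidence_interval(observed_srb, n_births, z=1.96):
    """Approximate 95% CI for a sex ratio at birth, via a normal
    approximation to the binomial proportion of male births."""
    p = observed_srb / (observed_srb + 100.0)   # proportion male
    se = math.sqrt(p * (1.0 - p) / n_births)
    lo_p, hi_p = p - z * se, p + z * se
    to_srb = lambda q: q / (1.0 - q) * 100.0    # back to males per 100 females
    return to_srb(lo_p), to_srb(hi_p)

# With roughly 10,000 sampled births, an observed SRB of 106 is
# compatible with true values between about 102 and 110
lo, hi = srb_confidence_interval(106.0, 10_000)
```

This illustrates why year-to-year SRB movements of a few points in non-census years can sit within sampling noise, whereas census-year estimates based on millions of births are far more precise.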
While small-scale surveys indicated under-enumeration in abortion and sex-selective abortion numbers, the extent of under-reporting was not officially or reliably provided. For induced abortion data, we adopted annual induced abortion numbers from the yearbooks published by China's Ministry of Health (renamed the Public Health and Family Planning Commission in 2013 and the Health Commission in 2018). This organization registered family planning surgical operations, including induced abortions that could be tracked in hospitals, clinics, and family planning service stations. However, it was not clear whether induced abortions without official registration were estimated and included in the official figures. In this paper, we used annual data on the officially registered number of births, induced abortions, and SRB to estimate the number of sex-selective abortions and two related proportions. However, there is an obvious underreporting phenomenon in China's birth data, particularly the serious underreporting of female infants, leading to a skewed sex ratio and a possible overestimation of the proportion of sex-selective abortions. We employed survival analysis to test the accuracy of the birth numbers in China. Given that the calculated births cover an entire calendar year while census time points vary, we utilized the linear difference method to compute the adjusted survival rate, as illustrated in Fig. . The results indicate that the number of births is not significantly overestimated or underestimated, and that the calculated number and proportion of sex-selective abortions using official data closely reflect the actual trend.

SRB trend

Figure depicts China's overall SRB from 1980 to 2020 and SRBs by birth order in 1982, 1987, 1990, 2000, 2005, 2010, 2015, and 2020. Figure presents SRBs by residence in 1987, 1990, 2000, 2005, 2010, 2015, and 2020. The trend of China's SRB and the differences in SRB by birth order and residence have been well documented, so we do not reiterate them here.
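The linear-difference alignment used in the birth-data quality check described earlier can be sketched as follows; the cohort counts and dates are hypothetical, and this is a simplified illustration of the idea rather than the authors' actual procedure:

```python
def interpolate_cohort_count(count_t1, count_t2, t1, t2, t):
    """Linearly interpolate a cohort's enumerated count between two
    census reference dates to a target date t (decimal years)."""
    w = (t - t1) / (t2 - t1)
    return (1.0 - w) * count_t1 + w * count_t2

def adjusted_survival_rate(registered_births, aligned_cohort_count):
    """Ratio of the cohort's aligned census count to its registered births;
    implausibly high or low values flag over- or under-reporting."""
    return aligned_cohort_count / registered_births

# Hypothetical cohort: 16.0 million registered births, enumerated at
# 15.4 and 15.0 million in two censuses ten years apart
aligned = interpolate_cohort_count(15.4e6, 15.0e6, 2000.0, 2010.0, 2005.0)
rate = adjusted_survival_rate(16.0e6, aligned)  # 0.95
```

A rate close to a plausible cohort survival probability supports the registered birth count; a rate well above 1 or far below survival-table levels would signal mis-reporting.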
Induced abortions

Table shows the number of births, the SRB, and the number and proportion of induced abortions from 1980 to 2020. In the 1980s, the number of induced abortions was already large, at 9.53 million in 1980. This figure increased rapidly, peaking at 14.37 million in 1983, and then sharply declined in 1984. Induced abortion has been officially advocated as a "remedial" measure for out-of-quota pregnancies. Through combined measures of reward, persuasion, and coercion, China's mass sterilization campaign in 1983 produced 14.37 million abortions. However, the enforced measures and campaigns caused an uproar and ignited strong resistance, so the enforcement was relaxed and the number of induced abortions declined. In the late 1980s and early 1990s, the number rebounded when the government imposed measures to minimize the unfavorable demographic impact of the relaxed policy introduced in 1984, and again strengthened its mandatory program of induced abortion and sterilization. Afterward, the number of induced abortions began to decline. Since the mid-1990s, the number of induced abortions has been quite stable, but recently it has resurged. In some empirical research on induced abortions, China's family planning policy was regarded as a dominant predictor of abortion. From 2009 to 2020, while the total number of family planning operations decreased from 22.77 million to 14.69 million, induced abortions increased from 6.11 million (26.8 percent of operations) to 8.96 million (61.02 percent). On one hand, the conception rate of married childbearing women dropped from around 89.0 percent in 2009 to 80.6 percent in 2018, increasing the risk of unwanted pregnancies and abortions. Meanwhile, induced abortions are becoming more common among unmarried young women, especially rural–urban migrant females.
Globally, 27 percent of induced abortions were obtained by unmarried women in 2010–2014; women in the 20–24 age group tend to have the highest abortion rate, and the bulk of abortions are accounted for by women in their twenties. China has become more and more tolerant of extramarital pregnancy and induced abortion, coinciding with the global trend. Worldwide, an estimated 50.4 million induced abortions occurred annually in 1990–94, and 56.3 million in 2010–2014. China accounted for a marked proportion of the world total.

Number and proportion of sex-selective induced abortions

Table and Fig. present the number and proportions of sex-selective induced abortions. The number and the proportions of sex-selective induced abortions relative to total abortions and expected births began to rise in the 1980s, remained at a high level from 1990 through 2010, and then declined. A total of 30.04 million female fetuses were selectively aborted from 1980 to 2020. The number and proportion of sex-selective induced abortions changed over time. In the early 1980s, the number and proportion of sex-selective abortions began to rise, and they were relatively high after 1984. In 1985, the number jumped to half a million and the two proportions rose to almost 5 percent. As ultrasound technology was first introduced in the early 1980s, some western scholars doubted the availability of this technology and its wide access in the 1980s in China, and thought sex-selective abortions were trivial if not nonexistent. Most scholars believed that ultrasound technology was applied to sex selection, especially in the middle 1980s and later. Due to the uncertainty in the extent of the availability of the technology, and the government's concerns over its illegal application for sex selection, it was difficult to gauge the prevalence of sex-selective abortions. Zeng et al. argued that the illegal use of this technology for sex identification was not rare.
Our estimates show that from 1980 through 1984 sex-selective induced abortions were only incipiently spreading. With the policy relaxation of 1984, rural couples in some provinces were permitted a second child and had the chance for repeated pregnancies and sex-selective induced abortions. Medical records of over 1.24 million pregnancies, presumably free of sex-selective underreporting, indicated that the SRB from 1988 through 1991 was 108.0, 108.3, 109.1, and 109.7. This evidence implied that some of these women had undergone sex-selective abortion before the recorded pregnancy, and also demonstrated the increasing prevalence of ultrasound B technology and sex-selective induced abortions in the late 1980s. Hull estimated that the number of sex-selective induced abortions would represent less than 5 percent of all induced abortions reported for 1986. Our estimate shows that in the middle and late 1980s the number of sex-selective abortions fluctuated around half a million, and the proportions relative to total abortions and expected births around 5 percent. For the two decades from 1990 to 2010, both the number and the proportions fluctuated at a very high level, oscillating around 1 million and stabilizing above 10 percent for most years. This high level can be ascribed to several factors. The first is the further diffusion of sex-selection technology and readily accessible equipment. The second is the birth control implementation characterized by a predominant 1.5-child policy in most rural regions. By 1990, almost 20 provinces implemented the 1.5-child policy in rural areas. This policy devalued daughters and implicitly stimulated couples to abort female fetuses. The third is the fertility squeeze effect, namely the role of declining fertility in exerting pressure on couples to resort to sex-selective abortion.
According to a survey in central rural China conducted in 2000, among 427 male and 279 female fetuses, 25.4 percent of the female fetuses were aborted, compared to just 1.6 percent of the male fetuses. An estimated 19.1 percent of couples in 1.5-child policy areas underwent sex-selective abortions, compared to only 4.6 percent in two-child policy areas. In 2007, the National Population and Family Planning Commission conducted a survey of aborted fetuses in some provinces over the seven years from 2000 to 2006. The survey was carried out by provincial and local family planning organizations. The number of fetuses identifiable by sex in 2000 to 2006 was 12,677, 10,922, 12,301, 13,742, 14,937, 15,541, and 18,549, respectively, and the number of males for every 100 aborted female fetuses was 74.02, 70.47, 73.45, 72.51, 71.79, 71.31, and 64.89, respectively. About a third of the aborted female fetuses were selectively aborted. Since 2010, the number has dropped below 1 million and the proportions below 10 percent. Prenatal sex identification technology has been easily available and affordable, and fertility has remained at a very low level spontaneously, both of which contributed to sex-selective induced abortions. There are still other factors pushing the sex-selective abortion phenomenon down. China continuously advocates gender equality and the social status of women has improved markedly, while China combats the "two illegals" (non-medical fetal sex identification and sex-selective abortion). Surveys in rural China did indicate a markedly improved bargaining power of women over marriage and the intra-household power structure, as well as a radical change in attitudes towards sons and daughters. China may optimistically follow the SRB transition trajectory of South Korea and return to normal, which would mean no more sex-selective induced abortions. As gender equality has gained widespread acceptance, with the advancement of women's education leading to higher social standing, the desire for male offspring is noticeably diminishing.
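The "about a third" figure follows directly from the reported sex ratios of aborted fetuses: if abortions were sex-neutral, roughly 106 males would be aborted per 100 females, so the male shortfall measures the excess (selective) female abortions. A sketch of that arithmetic, using the survey's reported ratios:

```python
def selective_share_of_aborted_females(males_per_100_aborted_females,
                                       srb_normal=106.0):
    """Share of aborted female fetuses attributable to sex selection,
    assuming sex-neutral abortion would mirror the normal sex ratio."""
    neutral_females = males_per_100_aborted_females / srb_normal * 100.0
    return (100.0 - neutral_females) / 100.0

# Ratios reported for 2000, 2001, and 2006 in the official survey
shares = {r: round(selective_share_of_aborted_females(r), 2)
          for r in (74.02, 70.47, 64.89)}
```

The resulting shares of roughly 0.30 to 0.39 are consistent with the survey's conclusion that about a third of aborted female fetuses were selectively aborted.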
With the introduction of the universal two-child policy in 2016, there was a substantial decrease in sex-selective abortions, as evidenced by the reduction in China's SRB from 116.23 in 2016 to 112.28 in 2020. This shift reflects a positive trend towards gender equality and the appreciation of girls within Chinese society.

Number and proportion by birth order

We also examined the number and proportion of sex-selective induced abortions by birth order, as presented in Table . Whether a female fetus will be aborted or born after an ultrasound B-scan is related to the order of the pregnancy and the composition of existing children. The higher the pregnancy order, the more likely the female fetus is to be aborted. For first births, the number and proportion were negligible before 2010 but rose in 2010. Since the late 2000s, as fertility spontaneously declined further, people turned to sex-selective abortion for first births. Survey data collected in 2013 in western China indicate that couples with son preference would turn to sex-selective induced abortions to ensure a son at first birth, and then subdue their intention to produce a second child. For second, third, and higher-order births, the number and proportion rose from 1982 to 2000 and remained at a high level during most of the period. However, the number and proportion of sex-selective abortions for second births started to decline after 2000. In 2020, the number and proportion were 18.38 thousand and 0.73 percent, respectively. Scholars have discerned the contribution of sex-selective induced abortion to the distortion in China's sex ratio at the second birth order in the 1980s. In 2005, the intercensal 1% population sample survey indicated that the sex ratio rose steeply for second-order births, while for first-order births it was normal. According to our estimate, the abortion of second-order female fetuses contributed most to the total of sex-selective induced abortions, followed by third-order induced abortions.
In 2000, sex-selective induced abortions at second-order births accounted for 75.08 percent of all sex-selective induced abortions, whereas those at first-order births accounted for about 5.8 percent. In 2020, the percentage of first-order selective abortions rose to 52.01 percent, due to the increase in selective induced abortions at first birth as a result of fertility decline and to the change in birth-order composition. Besides birth order, the heightened tendency for female fetuses to be aborted is correlated with the composition of existing children. Couples with only daughters are more likely to sex-select their next fetus to ensure a son. The 1990 census data indicate that the sex ratio of second births for women who had one daughter was 149.44, and 224.88 for women with two daughters . In 2000, the survey conducted in central rural China under the 1.5-child policy, mentioned above, showed that 92 percent of female fetuses in the second pregnancy were aborted if the first child was a girl, versus 5 percent if the first child was a boy . In the official survey implemented in 2007, reported in Cai , among the aborted fetuses of identifiable sex, families with only one daughter recorded the lowest sex ratio, 50.18, and families with two daughters 70.06 . We calculate the proportion of sex-selective induced abortions by children composition in 1990 and 2000. The data for the 1990 calculation are a 1 percent sample of the total population from the Integrated Public Use Micro-data Series ( https://ipums.org/ ) , including 3.21 million 15–49-year-old women with birth information. For 2000, we had no micro-data and adopted the SRB data from Sun . The proportion of sex-selective induced abortions to the expected births by children composition is listed in Table . Among compositions that already included a boy, the sex ratio at birth was low, and so was the proportion of sex-selective abortions.
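The link between these composition-specific sex ratios and the proportions in Table can be sketched as follows (our illustration, not the paper's exact estimation procedure): if the underlying sex ratio at birth is a normal 106 and a fraction f of female fetuses is aborted, the observed ratio inflates to 106/(1-f), so f can be recovered by inversion.

```python
NORMAL_SRB = 106.0  # assumed normal males per 100 females

def implied_fraction_aborted(observed_srb: float) -> float:
    """Fraction f of female fetuses aborted implied by an observed SRB.

    Observed SRB = NORMAL_SRB / (1 - f); ratios below 106 yield f < 0.
    """
    return 1.0 - NORMAL_SRB / observed_srb

print(round(implied_fraction_aborted(149.44), 3))  # after one daughter, ~0.291
print(round(implied_fraction_aborted(224.88), 3))  # after two daughters, ~0.529
print(round(implied_fraction_aborted(104.87), 3))  # below-normal SRB, small negative value
```

A negative implied fraction simply signals an observed ratio below the assumed normal level, which is why compositions that already include a boy can yield zero or negative proportions.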
In 1990, the sex ratio at birth among families that already had boys was low, at about 106, resulting in a zero or even negative estimated proportion of sex-selective abortions, which indicates that the decision to abort female fetuses was closely related to the sex composition of existing children. Son preference is strong in China, and it drives couples with only daughters to continue childbearing, pushing up the proportion of selective abortions. The results reveal that the heightened tendency for female fetuses to be selectively aborted is closely related to the sex composition.

Number and proportion by residence

Figure presents the number and proportion for city, township, and village populations. In 1987 and 1990 the proportion of selective induced abortions to the expected births was very low, but it rose steeply in 2000. Village and township proportions were higher than the city's. In the countryside, sons could provide labor in agricultural production, continue the family lineage, and provide old-age support for parents, so sons were much valued among the rural population. Rural couples in the 1.5-child policy areas preferred to have one daughter first so they could have a second birth for a son, achieving "having both a son and a daughter" in compliance with the policy; but that also meant much pressure to ensure a son and a higher likelihood of resorting to sex-selective induced abortion at second birth. The majority of sex-selective induced abortions of female fetuses took place among rural couples. In 2000, the numbers of sex-selective induced abortions for the city, township, and village populations were 125 thousand, 138 thousand, and 795 thousand, accounting respectively for 11.79 percent, 13.08 percent, and 75.14 percent of all sex-selective induced abortions. The number for the village population declined to 102 thousand, and its share to 30.43 percent, in 2020.
In contrast, city and township selective induced abortions increased rapidly to 42.02 percent and 27.55 percent of total selective induced abortions respectively, due partly to the rapid urbanization process, which raised the urbanization rate from 36.92 percent in 2000 to 63.89 percent in 2020. It was generally argued that rural parents were more likely to sex-select children, and sex selection mostly occurred among the rural population. However, when we broke down the proportion of selective abortions to the expected births by birth order for city, township, and village populations, as shown in Table , we found that the proportions for city and township populations were not significantly lower than the corresponding proportion for the village population, indicating that urban people were not less likely to sex-select their children than their rural counterparts at the same birth order. But first births, which carry a much lower percentage of selective abortions, accounted for 87.14 percent and 80.08 percent of all city and township births respectively in 2000, 78.28 percent and 64.04 percent in 2010, and 51.81 percent and 42.85 percent in 2020, much higher than the corresponding village shares of 66.22 percent in 2000, 57.58 percent in 2010, and 40.03 percent in 2020; as a result, the overall percentage of selective abortions to expected births was highest among the village population. In recent years, the preference for boys has decreased while the preference for girls has increased. Since 2017, fertility preference has significantly favored girls , and the sex ratio at birth has gradually normalized. In 2020, the SRB for second births in rural areas was 104.87, below the normal SRB of 106, suggesting a decline in sex-selective abortions.

Number and proportion by province

Table , Figs. and present the number and proportion of sex-selective induced abortions by province.
Table presents the temporal trend of each province and the comparison among provinces in terms of the proportion of sex-selective induced abortions to the expected births. Generally, the proportion rose from 1990 to 2000 and 2010, then declined by 2015; the proportion of sex-selective abortion was notably lower in the western and northeastern areas than in the central and eastern regions of China. These results highlight the provincial disparity in the proportion of sex-selective abortion. Figures and illustrate the spatial discrepancies with maps. The central and eastern provinces have higher proportions and larger numbers due to their larger populations and the fertility squeeze. China is characterized by vast provincial differences in population indicators such as population size and the number and order composition of births. According to the 2000, 2010, and 2020 censuses, nine, ten, and eleven provinces had populations of over 50 million, while five, four, and three provinces had populations of less than 10 million in those respective years . Alongside the marked differences in population indicators was China's provincially localized family planning policy . Around 2000, six provinces implemented the one-child policy (Beijing, Tianjin, Shanghai, Chongqing, Jiangsu, and Sichuan); five provinces implemented the two-child policy (Hainan, Ningxia, Qinghai, Yunnan, and Xinjiang); and the other 19 provinces implemented the 1.5-child policy . Each province had its own policy fertility circa 2000 and 2010 . In provinces granting a quota of 1.5 or two births per couple, couples relied heavily on selective induced abortion for the second pregnancy if their first-born was a daughter . Since 2013, China has gradually introduced the "selective two-child policy", the "universal two-child policy", and the "three-child policy", thereby expanding the options for women of childbearing age.
Meanwhile, people have attached less importance to having a boy, weakening the desire for sons. In 2020, sex-selective abortions declined significantly in every province, indicating that the policy has had a positive impact on reducing these practices. In 2020, the SRB in seven provinces (Shanxi, Inner Mongolia, Jilin, Heilongjiang, Tibet, Ningxia, and Xinjiang) fell below 106 and returned to normal levels, resulting in a decline in the proportion of sex-selective abortions, or even negative values, indicating a weakening preference for boys in some provinces of China. The difference in selective abortions by province is a combined result of socioeconomic development, cultural environment, population base, family planning policy, and many other factors. Due to space limitations, we do not investigate provincial differences further.

Figure depicts China's overall SRB from 1980 to 2020 and SRBs by birth order in 1982, 1987, 1990, 2000, 2005, 2010, 2015, and 2020. Figure presents SRBs by residence in 1987, 1990, 2000, 2005, 2010, 2015, and 2020. The trend of China's SRB and the differences in SRB by birth order and residence have been well documented, so we do not reiterate them here. Table shows the number of births, the SRB, and the number and proportion of induced abortions from 1980 to 2020. In the 1980s, the number of induced abortions was already large: 9.53 million in 1980. This figure increased rapidly, peaking at 14.37 million in 1983, and then sharply declined in 1984. Induced abortion was officially advocated as a "remedial" measure for out-of-quota pregnancies. Through combined measures of reward, persuasion, and coercion, China's 1983 mass sterilization campaign was accompanied by 14.37 million abortions . However, the enforced measures and campaigns caused an uproar and ignited strong resistance, so enforcement was relaxed and the number of induced abortions declined .
In the late 1980s and early 1990s, the number rebounded when the government imposed measures to minimize the unfavorable demographic impact of the relaxed policy introduced in 1984, and again strengthened its mandatory program of induced abortion and sterilization . Afterward, the number of induced abortions began to decline. Since the mid-1990s, the number of induced abortions has been quite stable, but recently it has resurged. In some empirical research on induced abortions, China's family planning policy was regarded as a dominant predictor of abortion . From 2009 to 2020, while the total number of family planning operations decreased from 22.77 million to 14.69 million, induced abortions increased from 6.11 million (26.8 percent of operations) to 8.96 million (61.02 percent). On the one hand, the contraceptive prevalence rate among married childbearing women dropped from around 89.0 percent in 2009 to 80.6 percent in 2018 , increasing the risk of unwanted pregnancies and abortions. Meanwhile, induced abortions are becoming more common among unmarried young women, especially rural–urban migrant females . Globally, 27 percent of induced abortions were obtained by unmarried women in 2010–2014 ; women in the 20–24 age group tend to have the highest abortion rate, and the bulk of abortions are accounted for by women in their twenties . China has become more and more tolerant of extramarital pregnancy and induced abortion, coinciding with the global trend. Worldwide, an estimated 50.4 million induced abortions occurred annually in 1990–94, and 56.3 million in 2010–2014 ; China accounted for a marked proportion of the world total. Table and Fig. present the number and proportions of sex-selective induced abortions. The number and proportions of sex-selective induced abortions relative to total abortions and to expected births began to rise in the 1980s, remained at a high level from 1990 through 2010, and then declined.
There has been a total of 30.04 million sex-selectively aborted female fetuses from 1980 to 2020. The number and proportion of sex-selective induced abortions changed over time. In the early 1980s, the number and proportion of sex-selective abortions began to rise, and they were relatively high after 1984. In 1985, the number jumped to half a million and the two proportions rose to almost 5 percent. As ultrasound technology was first introduced in the early 1980s, some western scholars doubted its availability and wide access in 1980s China and thought sex-selective abortions were trivial if not nonexistent . Most scholars, however, believed that ultrasound technology was applied to sex selection, especially from the mid-1980s onward . Due to the uncertainty about the extent of the technology's availability, and the government's concerns over its illegal application for sex selection, it was difficult to gauge the prevalence of sex-selective abortions . Zeng et al. argued that the illegal use of this technology for sex identification was not rare. Our estimates show that from 1980 through 1984 sex-selective induced abortions were only incipiently spreading. With the policy relaxation of 1984, rural couples in some provinces were permitted a second child and had the chance for repeated pregnancies and sex-selective induced abortions. Medical records of over 1.24 million pregnancies, presumably free of sex-selective underreporting, indicated that the SRB from 1988 through 1991 was 108.0, 108.3, 109.1, and 109.7. This evidence suggested that some of those women had undergone sex-selective abortion before the recorded pregnancy, and also demonstrated the increasing prevalence of ultrasound B technology and sex-selective induced abortions in the late 1980s . Hull estimated that sex-selective induced abortions would represent less than 5 percent of all induced abortions reported for 1986.
Our estimate shows that in the mid and late 1980s the number of sex-selective abortions fluctuated around half a million, and the proportions relative to total abortions and to expected births around 5 percent. For the two decades from 1990 to 2010, both the number and the proportions fluctuated at a very high level, oscillating around 1 million and staying above 10 percent for most years. This high level can be ascribed to several factors. The first is the further diffusion of sex-selection technology and readily accessible equipment. The second is the implementation of birth control, characterized by a predominant 1.5-child policy in most rural regions; by 1990, almost 20 provinces had implemented the 1.5-child policy in rural areas . This policy devalued daughters and implicitly stimulated couples to abort female fetuses . The third is the fertility squeeze effect, namely the role of declining fertility in exerting pressure on couples to resort to sex-selective abortion .
Sex-selective induced abortion of female fetuses has been practiced in China since the early 1980s and is still practiced today. Its spread was facilitated by China's family planning program, which adopted abortion as a remedial measure for out-of-quota pregnancies. The phenomenon, first a countermeasure by farmers against birth constraints, became an active measure by couples with son preference in the fertility squeeze context. Due to legal, ethical, and moral considerations, data on sex-selective induced abortions are unavailable. In this paper, using official data, we estimated the number and proportions of sex-selective induced abortions of female fetuses; the findings are as follows. The annual proportions and number of sex-selective induced abortions of female fetuses began to rise in the 1980s, remained at a high level from 1990 through 2010, and then declined. The practice was made instrumentally possible by the large-scale introduction of ultrasound B machines in the early 1980s, but was driven mainly by the conflict between the birth constraints of the nationwide family planning program and the pursuit of sons by peasants . At the beginning, the proportions and number were low, but with the diffusion of the technology and the stringent implementation of family planning policy, they remained at a high level for two decades, declining in the 2010s due to the mainstreaming of gender equality and the improved status of women. Worldwide, sex-selective abortions represented around 3 percent of all induced abortions ; China's share was much higher. More recently, with the relaxation of family planning and spontaneously low fertility, people mostly intend to have only one or two children, yet selective abortion is still being practiced. The large scale of sex-selective abortion in China has led to a serious gender imbalance in society.
There has been a total of up to 30.04 million sex-selective induced abortions of female fetuses between 1980 and 2020 according to this estimate. This total is higher than the 11.9 million (confidence interval 8.5–15.8 million) missing females, and higher than the 10.60 million (confidence interval 8.0–13.6 million) missing females in India, for the period between 1970 and 2017 obtained by Chao et al. , but lower than the general claim that China is short of 30 to 40 million females , and lower than the 45.81 million missing females that comprise both sex-selective abortion and excess female child mortality . The problem of missing girls is growing faster in China than in India . This suggests that China's gender imbalance is even worse than India's, with far-reaching consequences for society. This number should nevertheless be interpreted with caution, as it is affected by several factors. If we took the lower or upper bounds of all SRBs, the total number of selective abortions of female fetuses would be reduced to 20.41 million or increased to 36.88 million. Moreover, due to underreporting of total induced abortions, the proportion of selective induced abortions to total induced abortions should be adjusted downward accordingly. The proportions and number of selective abortions varied with birth order and children composition. For first births, the proportions and number were negligible before 2010 but rose in 2010, as couples with son preference increasingly aborted female fetuses at first order to secure a son with just one birth. The proportions and number for second, third, and higher-order births rose in the 1980s, remained at a very high level during most of the period, and then declined after 2010.
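The bounding exercise can be sketched with a simple hedged illustration (the birth counts below are hypothetical; the paper applies the logic to official data): expected female births are male births scaled by the normal SRB, and the excess of expected over observed females approximates the selectively aborted females in a given year.

```python
NORMAL_SRB = 106.0  # assumed normal males per 100 females

def selective_abortions(male_births: float, observed_srb: float) -> float:
    """Approximate selectively aborted females for one year."""
    observed_females = male_births * 100.0 / observed_srb
    expected_females = male_births * 100.0 / NORMAL_SRB
    return expected_females - observed_females

# Hypothetical year: 9 million male births at an observed SRB of 118
print(round(selective_abortions(9_000_000, 118.0)))  # about 0.86 million
# Taking lower or upper SRB bounds shrinks or enlarges the estimate,
# which is how a range like 20.41-36.88 million around 30.04 million arises.
print(round(selective_abortions(9_000_000, 115.0)))  # lower-bound SRB, smaller estimate
print(round(selective_abortions(9_000_000, 121.0)))  # upper-bound SRB, larger estimate
```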
With the availability of sex identification technology in a county, the probability of a male birth could increase by 1.3 and 2.4 percentage points for second-order and third- or higher-order births, or even by 4.8 percentage points for second-order births and 6.8 percentage points for third-order births among couples with no sons . China's localized family planning policy, especially the 1.5-child policy in rural areas, stimulated couples to abort female fetuses . Sex-selective abortions were most likely to occur when couples had only daughters; the more daughters, the more likely the next female fetus was to be aborted. The higher the order, the more likely women were to visit private clinics for the sex information of the fetus, even at a higher cost . However, with the spontaneous fertility decline, the change in birth-order composition, and the intention of aborting first-order female fetuses, the proportion of first-order abortions among total selective abortions rose markedly, from 5.8 percent in 2000 to 32.70 percent in 2010. The increasing costs of raising children have pushed parents to realize their desire for a son within the confines of fewer births ; the fertility decline increased the pressure for sex selection, and people with a strong son preference tend to selectively abort female first-order fetuses . City, township, and village populations showed differences in proportion and number in census and intercensal 1% population survey years. The proportion of selective induced abortions to the expected births rose steeply in 2000 from a previously low level, and village and township proportions were much higher than the city's. Sex-selective induced abortions mainly took place among the rural population, but the proportion of urban selective abortions rose markedly, due partly to the urban–rural composition change in births resulting from the rapid urbanization process after 2000.
When comparing by birth order, city and township proportions were higher than the corresponding proportions of the village population; urban people were no less likely to sex-select their children than their rural counterparts. Just as with research on SRB by birth order , when we discuss the difference in selective abortion between urban and rural populations, it is more enlightening to compare by birth order after eliminating the effect of birth order composition. For provincial comparison, the proportion generally rose from 1990 to 2000 and 2010, then declined in 2015. Central and eastern provinces had higher proportions and larger numbers. With the liberalization of the three-child policy, several provinces are seeing a return to a normal sex ratio at birth. This shift helps reduce discrimination against girls and women and promotes gender equality, fostering a more balanced society where all genders are valued. After four decades of selective abortion of female fetuses, China is now confronted with numerous challenging demographic and public policy questions that have arisen from sex-selective induced abortions and the subsequent phenomenon of missing girls , which has led to an imbalanced population sex structure and a male marriage squeeze , and affected China’s population trajectory in the long term . Accordingly, people have adjusted their economic behavior in the context of a shortage of marriageable women, for example by accumulating wealth for marriage and raising the bride price and marriage expenditure to compete in the marriage market . The long-term practice of selective induced abortion of female fetuses has affected and will continue to affect many aspects of Chinese society, the implications of which should now be handled with caution in China. 
China’s birth control policy and its enforcement of induced abortion as a birth control measure have been widely criticized , and the imbalanced sex structure and excess males due to sex-selective induced abortions, and their potential threat to society, have been widely discussed. In addition, there are direct and indirect costs of this sex-selective practice, such as the immediate costs of sex-selective abortion operations and their complications, and the costs of medical care for longer-term health consequences. Sex-selective induced abortion is generally stigmatized. Mothers who underwent sex-selective abortions suffer psychological pressure and health risks. More broadly, selective abortion deprives the aborted fetus of the right to life and tramples on the birth rights of women. The academic consensus in China aligns with the official stance that non-medical sex-selective abortion, as well as non-medical prenatal sex diagnosis, is morally unacceptable and should be prohibited by law . China continuously advocates gender equality and the social status of women has improved markedly, while China combats the “two illegals” . In 2014, Hubei province alone rewarded more than 540 people who reported “two illegals” cases, cracked 4,193 “two illegals” cases, and punished 422 doctors who practiced the “two illegals” . However, China’s prohibitive laws and policies have never been rigorously implemented, the penalties for violating these codes are not made explicit, and they are often very lenient in practice. Women's reproductive health can be improved by reducing unintended pregnancies and induced abortions, as well as by enhancing sexual health education, elevating sexual morality, decreasing premarital sex, increasing awareness of contraception, and fostering a positive and healthy conception of fertility. A relaxation of the one-child policy could allow more parents to have a son without resorting to sex selection . 
The implementation of the three-child policy will enhance fertility support, improve women's education, promote gender equality, reduce discrimination against girls and women, and shift gender preference towards "no preference." Additionally, we will continue to combat the "two illegals", monitor new technologies in gender identification, and remain vigilant against gender selection in assisted reproductive technology. With high levels of development, modernization, and urbanization, son preference will decline, and the value of sons and daughters tends to equalize . In China, the deeply entrenched son preference is currently waning due to low fertility intention, the pressure from the tight male marriage market, the heavy burden of marriage, and the improved status of women. We hope all this will reduce sex-selective abortion and improve gender equality in China. |
Mini-Sternotomy | f059fd71-294b-425a-b1dd-0514258a2eff | 11925351 | Thoracic Surgery[mh] | Aortic valve replacement (AVR) has expanded dramatically over the years owing to
advancements in treatment modalities and technologies . Surgical options encompass conventional and
minimally invasive approaches, both showing similar mortality rates . Nevertheless, minimally invasive
AVR has become increasingly popular due to its ability to avoid complete sternotomy.
It offers several benefits, including reduced ventilation time, shorter intensive
care unit (ICU) and hospital stays, lower blood loss and transfusion requirements,
reduced atrial fibrillation rates, faster recovery, and better cosmesis . Moreover, it is equally safe and
efficient with reduced hospital costs . Mini-sternotomy (MS) and right anterior mini-thoracotomy (RAMT) are the most common
minimally invasive approaches to AVR. These surgical techniques offer distinct
advantages and technical challenges ranging from surgical exposure to postoperative
recovery and outcomes. MS is performed through a 3- to 7-cm midline skin incision
with upper partial sternotomy, and it is associated with less postoperative pain,
less blood loss, and lower rates of wound infection and dehiscence . Conversely, RAMT is performed through a 5- to 7-cm incision in
the right second intercostal space without traumatizing the sternum. Comparative
data showed lower postoperative pain and shorter ICU length of stay (LOS) with the
use of thoracotomy . As one of the most commonly performed procedures worldwide, AVR undergoes continuous
refinement to improve surgical outcomes, minimize invasiveness, and optimize
recovery. However, data comparing MS and RAMT are limited, and there are no
randomized trials in the literature. Salmasi et al. aggregated
previously published data on this topic, showing significant differences in
postoperative outcomes. Although its central aim was to directly compare these two
techniques, this work has some intrinsic limitations that prevent it from providing
a clear overview of this subject. For instance, a sensitivity analysis to identify
outliers was not performed, and fewer than 10% of patients (n=2,926) from the current
literature were included. In this context, previous studies with populations basically from single-center
registries have demonstrated the safety of RAMT in relation to perioperative
mortality , however a
meta-analytical analysis involving all the major multinational registries on the
subject based in the new standards of systematic reviews has not yet been performed.
Ultimately, significant innovations marked the last 10 years of minimally invasive
cardiac surgery, and especially valve therapies. As the current guidelines do not
mention any kind of procedure preference in different situations, choosing between
the two approaches remains a surgeon’s decision and requires careful
consideration. In this context, we performed a systematic review of the topic and a meta-analysis of
contemporary studies to compare major clinical outcomes between the two
strategies.
Ethical approval of this analysis was not required as no human or animal subjects
were involved. This review was registered with the National Institute for Health
Research International Registry of Systematic Reviews (CRD42023451208). Search Strategy We performed a comprehensive literature search to identify contemporary studies
reporting short- and long-term outcomes between patients who underwent AVR with
the two different techniques (MS or RAMT). Searches were run in March 2023 in
the following databases: Ovid MEDLINE®, Embase, and Google Scholar. The
search strategy is available in . Study Selection and Data Extraction The study selection followed the Preferred Reporting Items for Systematic Reviews
and Meta-Analyses (PRISMA) strategy. After de-duplication
( i.e. , exclusion of records with the same Digital Object
Identifier, but with possibly minimal differences in title), the records were
screened by two independent reviewers (DS and AAR). Any discrepancies and
disagreements were resolved by a third author (TC). Titles and abstracts were
reviewed against predefined inclusion and exclusion criteria. Studies were
considered for inclusion if they were written in English and reported direct
comparisons between patients who underwent AVR with the two different techniques (MS or
RAMT). Animal studies, abstracts, case reports, commentaries, editorials, expert
opinions, conference presentations, and studies which did not report the
outcomes of interest were excluded. The full text was pulled for the selected
studies for a second round of eligibility screening. References for articles
selected were also reviewed for relevant studies not captured by the original
search. Studies by the same author comprising the same population were
critically analyzed to avoid population overlapping. The Risk of Bias in Non-Randomized Studies of Interventions (or ROBINS-I) tool
was systematically used to assess included studies for risk of bias . The studies and their
characteristics were classified into low, moderate, and serious risk of bias.
Two independent reviewers (DS and AAR) assessed the risk of bias. When there was
a disagreement, a third reviewer (TC) checked the data and made the final
decision ( ). Two reviewers (DS and AAR) independently performed data extraction. Accuracy was
verified by a third author (TC). The extracted variables included study
characteristics (publication year, sample size, study design, country, study
period, and presence or absence from population adjustment) as well as patient
demographics (age, sex, body mass index [BMI], mean left ventricular ejection
fraction [LVEF], European System for Cardiac Operative Risk Evaluation
[EuroSCORE], hypertension, diabetes, chronic kidney disease, peripheral artery
disease [PAD], chronic obstructive pulmonary disease [COPD], and the nature of the
aortic valve disease). Selected Endpoints The primary endpoint was perioperative mortality, defined as 30-day or
in-hospital mortality. The secondary endpoints were reoperation for bleeding,
stroke, operation duration, ICU LOS, cardiopulmonary bypass (CPB) time,
cross-clamping time, hospital LOS, paravalvular leak, renal complications,
conversion to full sternotomy, permanent pacemaker implantation, and wound
infection. Random effects models were performed. Statistical Analysis Odds ratio (OR) with 95% confidence interval (CI) and P -values
were calculated for each of the clinical outcomes. Standard mean difference
(SMD) was calculated for the continuous variables. An OR > 1 indicated that
the outcome was more frequently present in the MS group. A SMD > 0
corresponded to longer stay/time in the MS group. Forest plots were created to
represent the clinical outcomes. Chi-squared and I² tests were used to
assess statistical heterogeneity . The ORs were
combined across the studies using a random effects model , which also served to
balance the inherent clinical heterogeneity between the studies . Funnel plots were constructed to assess publication
bias. All analyses were completed through the “metafor” package of R Statistical
Software (version 4.0.2), Foundation for Statistical Computing (Vienna,
Austria).
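For illustration, the random-effects pooling described above can be sketched in Python. The review itself used the “metafor” package in R; the function below implements the standard DerSimonian-Laird estimator, and the per-study ORs and 95% CIs are hypothetical, not data from the included studies.

```python
import math

def pool_random_effects(ors, ci_low, ci_high, z=1.96):
    """DerSimonian-Laird random-effects pooling of odds ratios.

    Each study's log-OR standard error is recovered from its reported
    95% CI: se = (ln(hi) - ln(lo)) / (2 * z).
    """
    y = [math.log(o) for o in ors]                        # log odds ratios
    se = [(math.log(h) - math.log(l)) / (2 * z)
          for l, h in zip(ci_low, ci_high)]
    w = [1.0 / s**2 for s in se]                          # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar)**2 for wi, yi in zip(w, y))  # Cochran's Q
    k = len(y)
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                    # between-study variance
    w_re = [1.0 / (s**2 + tau2) for s in se]              # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se_mu = math.sqrt(1.0 / sum(w_re))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return {"or": math.exp(mu),
            "ci": (math.exp(mu - z * se_mu), math.exp(mu + z * se_mu)),
            "i2": i2}

# Hypothetical per-study ORs with 95% CIs (not the review's actual data)
pooled = pool_random_effects([0.7, 1.1, 0.9], [0.4, 0.6, 0.5], [1.2, 2.0, 1.6])
```

Recovering each study's standard error from its published 95% CI is a common approach when only ORs and CIs, rather than raw event counts, are reported.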
Study Characteristics A total of 1,123 studies were retrieved from the systematic search, of which 10
met the criteria for inclusion in the final analysis [ , - ] . shows the PRISMA flowchart for study selection. shows the details of the included
studies. The overall patient population was 30,524, of whom 20,932 underwent MS
and 9,592 underwent RAMT. Included studies were published between 2014 and
2023; all were non-randomized, observational, and retrospective in
nature. Six studies included risk-adjusted populations. Two studies were
multinational databases. Patient Characteristics summarizes the
demographic data of the overall patient population. There was no relevant
difference regarding age, sex, BMI, LVEF, EuroSCORE, presence of hypertension,
diabetes, PAD, COPD, and combined aortic disease. Meta-Analysis Central Image and outline the
detailed results of the meta-analysis. Primary Endpoint shows the forest plot for
perioperative mortality. There was no statistically significant difference between the
two approaches (OR: 0.83; 95% CI 0.57-1.21; P =0.33). shows the
leave-one-out analysis showing that most of the studies confirm the robustness
of the analysis, with minimal variations of the CI. provides the funnel plot for the
publication bias assessment. Secondary Outcomes shows the forest plot for
reoperation for bleeding. The RAMT group showed significantly higher rates of
reoperation for bleeding in comparison with the MS group (OR: 0.69; 95% CI
0.50-0.97; P =0.03). shows the forest plot for stroke.
The RAMT group showed significantly lower rates of stroke in comparison with the
MS group (OR: 1.27; 95% CI 1.01-1.60; P =0.04). shows the forest plot for
operation duration. The RAMT group showed significantly longer operation
duration in comparison with the MS group (SMD: -0.58; 95% CI -1.01 to -0.14; P =0.01). Further comparison regarding ICU LOS, CPB time, cross-clamping time, hospital
LOS, paravalvular leak, renal complications, conversion to full sternotomy,
permanent pacemaker implantation, and wound infection were not statistically
significant ( - ). Information regarding the valve types
“sutureless vs. stented” or “tissue vs. mechanical” were only mentioned in two of the studies .
The analysis suggests that both techniques present similar perioperative mortality
rates for AVR. However, RAMT is associated with higher rates of reoperation for
bleeding, lower rates of stroke, and longer operation duration. There was no
difference regarding ICU LOS, CPB time, cross-clamping time, hospital LOS,
paravalvular leak, renal complications, conversion to full sternotomy, permanent
pacemaker implantation, and wound infection. Minimally invasive alternatives for AVR have demonstrated comparable safety and
efficacy to conventional median sternotomy . Their ability to
provide favorable clinical outcomes is associated with advancements in surgical
techniques, instrumentation, and perioperative care . This
paradigm shift towards less invasive approaches has been driven by the pursuit of
reduced morbidity, shorter hospital stays, expedited recovery, and greater patient
satisfaction. Although this meta-analysis did not show an overall advantage for
either of the minimally invasive techniques when compared to each other, there is a
relative reduction in postoperative pain associated with faster mobilization and
recovery when comparing these techniques to traditional AVR with complete
sternotomy [ - ] . Previous comparisons between MS and RAMT provided valuable insights into their
respective benefits and limitations [ , - ] , but the best approach remains under debate. This
systematic review and meta-analysis demonstrated equivalent perioperative mortality
rates between the two groups, which emphasizes the competency of surgeons in
performing both techniques and the overall safety of minimally invasive AVR. Additionally, RAMT showed a higher rate of reoperation for bleeding when compared to
MS. This finding suggests that MS may offer more effective hemostasis strategies due
to better visualization, resulting in fewer postoperative bleeding complications.
However, it does not decrease the safety of RAMT, as the bleeding events were not
related to higher perioperative mortality in this analysis. The information is,
however, relevant for special patient populations, for instance, Jehovah’s
Witnesses. An important and curious finding of this work is that MS was associated with
more frequent stroke events. Although both techniques demand aortic cross-clamping
for the surgery, the majority of RAMT procedures are performed using femoral
cannulation, which could unexpectedly be related to the reduction in
cerebrovascular events . Furthermore, the longer operation times observed with RAMT may be attributed to the
learning curve associated with this technique. As surgeons become more experienced
with RAMT, operation times could potentially decrease, enhancing overall surgical
efficiency . The choice between MS and RAMT is influenced by various factors, including surgeon
experience and patient-specific considerations. The analysis revealed that most of
the included studies had larger cohorts of MS patients, possibly indicating a
preference for this technique among surgeons due to familiarity or perceived ease of
execution. This raises questions about the impact of surgeon experience on the
outcomes and preferences for a specific technique. It's worth acknowledging that the lack of standardized reporting regarding valve
types and replacement methods introduces heterogeneity into the analysis. The type
of valve used and the method of replacement can also influence operation duration
and, consequently, patient outcomes. As newer technology and instruments continue to
develop, the creation of modern rapid deployment stented valves and refined
instrumentation can contribute to enhanced patient safety and long-term outcomes for
minimally invasive AVR. Finally, as transcatheter aortic valve replacement gains
prominence, there is an increasing demand for surgical solutions that offer superior
cosmetic results, shorter hospital stays, and long-term durability. The insights
gained from this analysis contribute to the ongoing dialogue surrounding the
selection of surgical approaches, particularly in the context of evolving patient
expectations and advancing technology. Study Strength and Limitations We analyzed 12 different outcomes besides mortality. Six out of ten studies
presented data from risk-adjusted populations and most of them had a
well-designed methodological approach. However, this work has the intrinsic
limitations of observational series, including the risk of methodological
heterogeneity of the included studies and residual confounders. In addition,
treatment allocation bias is likely present in all observational series
comparing two therapies with different invasiveness. Ghoreishi et al. contributed a
disproportionately large number of patients compared to the other studies . Finally, information
regarding whether the same surgeons performed both techniques or not was not
mentioned in all of the studies. Besides that, there was not enough data in the
studies regarding the type of prosthesis used, which can substantially influence
long-term results, and regarding the complications resulting from peripheral
cannulation. Precisely for this reason, this work has an important role in
generating hypotheses and indicating possible clinical correlations between
clinical events that can support the design of new RCTs. Therefore,
this meta-analysis based on observational studies is inherently limited in its
ability to establish causal relationships.
The analysis suggests that both techniques present similar perioperative mortality
rates for AVR. However, RAMT is associated with higher rates of reoperation for
bleeding, lower rates of stroke, and longer operation duration. There was no
difference regarding ICU LOS, CPB time, cross-clamping time, hospital LOS,
paravalvular leak, renal complications, conversion to full sternotomy, permanent
pacemaker implantation, and wound infection.
|
Hidden artistic complexity of Peru’s Chancay culture discovered in tattoos by laser-stimulated fluorescence | 2302e4ab-9da4-432b-9cdf-9fd11b213134 | 11789198 | Surgical Procedures, Operative[mh] | This first application of the LSF technique to tattoos on mummified human remains has yielded otherwise hidden results . LSF was able to backlight the pre-Columbian tattoos by making the skin fluoresce brightly, but not the likely carbon-based black ink . With postprocessing for image equalization, saturation, and color balance , the skin becomes white behind the black outlines of the tattoo art. This reveals in these specimens detailed density differences in the ink and virtually eliminates the ink “bleed”, highlighting the precise locations of the original tattoo markings, as seen in . These fine 0.1 to 0.2 mm wide lines are narrower than those produced by the standard #12 modern tattoo needle (0.35 mm) and were only seen in a limited number of mummified individuals out of over 100 inspected specimens. Most specimens showed tattoos that were more amorphous patches with poorly defined edges (e.g., ). LSF was able to more clearly define the features in the artwork by increasing the contrast between the skin and the ink .
The first application of the LSF technique in the study of tattoos on mummified human remains revealed otherwise hidden details not seen using existing techniques like infrared imaging . The 0.1 to 0.2 mm wide linear details reflect the fact that each ink dot was placed deliberately by hand with great skill, creating a variety of exquisite geometric and zoomorphic patterns. We can assume that this technique involved a pointed object finer than a standard #12 modern tattoo needle, probably a single cactus needle or sharpened animal bone based on known materials available to the artists . This suggests that an additional tool was probably unnecessary to tap the point into the skin. The width of the lines corroborates the use of the widely known traditional needle-based tattooing technique, as opposed to “cutting and filling” with ink. In the context of Peruvian archeological cultures, our LSF results indicate the cultural art in the studied tattoos had exceptionally fine scale detail and patterns not seen in other existing Chancay cultural art, e.g., associated pottery, textiles, and rock art , supporting a more partitive decorative organization among the Chancay [ sensu ]. As the most intricate art found in the Chancay culture to date, tattoos were potentially another important category of object—along with textiles— in which aesthetic expectations and performances appear to be concentrated as part of the aesthetic locus of the Chancay . Our investigation found that intricate tattoos were not present on all mummified human remains suggesting that they were restricted to a subset of the population, but future work involving new mummy discoveries would be needed to test this. The study therefore reveals higher levels of artistic complexity in pre-Columbian Peru than previously appreciated, which expands the degree of artistic development found in South America at this time. 
LSF imaging therefore has the potential to reveal similar milestones in human artistic development through the study of other ancient tattoos, including the evolution of tattooing methods.
Analyses were undertaken on mummified human remains curated at the Arturo Ruiz Estrada Archaeological Museum of the José Faustino Sánchez Carrión National University of Huacho, Peru. These remains were discovered in 1981 during a rescue excavation led by Dr. Arturo Ruiz and his team at the Cerro Colorado cemetery in the Huaura Valley of Peru, an archeological zone located between Puerto de Huacho and Barrio de Amay, near the modern city of Huacho. The first results of radiocarbon dating indicate a chronological affiliation between 1222 and 1282 AD, belonging to the Chancay culture from the pre-Columbian Andes. Mummies, as well as individual limbs, were examined using a handheld UV flashlight as a triage for further imaging under LSF. A 405 nm laser line was scanned across the artwork during a time exposure in a dark room . The images were postprocessed uniformly for equalization, saturation, and color balance in Photoshop . Some mummies were encased and inaccessible for scale bar placement, so in these specimens, the scale was estimated using focus and distance. Consent to the research was granted by the Director of the Arturo Ruiz Estrada Archaeological Museum, as part of a research project by Judyta Bąk, which was approved by the José Faustino Sánchez Carrión National University authorities. All ethical implications arising from the research were taken into account, including but not limited to remains of a historical person, indigenous people, cultural and religious sensitivity, and living descendants. Mummified human remains were handled and studied with care, in strict accordance with the university’s rules and regulations, following standard archeological practice.
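As a toy illustration of the equalization step mentioned above (the study's actual postprocessing was performed in Photoshop, so this function is only a sketch of the underlying idea), a minimal histogram equalization over 8-bit grayscale values can be written as follows; the pixel values in the example are invented.

```python
def equalize(pixels, levels=256):
    """Histogram-equalize a flat list of 8-bit gray values so that
    intensities spread across the full 0..levels-1 range, boosting
    contrast between dark ink and brightly fluorescing skin."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:                       # cumulative distribution
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    if n == cdf_min:                     # perfectly flat image: nothing to do
        return pixels[:]
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

# A dim, low-contrast strip of pixels is stretched to span 0..255
out = equalize([50, 50, 100, 100, 200, 200])
```

The same stretching of the intensity histogram is what makes the fluorescing skin render as near-white behind the black tattoo outlines.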
|
Dental behaviour support: can we improve qualitative research on patient experience? | 5776a2e8-3529-4df8-a9ed-db5bac36235e | 11436384 | Dental[mh] | Geddis-Regan A, Fisal A B A, Bird J, Fleischmann I, Mac Giolla Phadraig C. Experiences of dental behaviour support techniques: A qualitative systematic review. Community Dent Oral Epidemiol 2024; doi: 10.1111/cdoe.12969.
This PROSPERO-registered, qualitative systematic review focused upon patient, carer and parental experiences of dental behaviour support (DBS). DBS may be viewed as an overarching term which includes a wide range of options, from communication-based interventions and specific behaviour management techniques through to approaches including dental general anaesthesia (DGA). A wide range of DBS techniques are used regularly in the support and delivery of professional oral health care, but their evaluation has tended to focus upon trials using a wide range of quantitative outcome measures, with the risk this may result in selective reporting and/or unnecessary heterogeneity. This systematic review reports a clear aim and objectives, research question, a PICOS-guided search strategy and PRISMA flow diagram. The inclusion and exclusion criteria encompass all patient groups recruited to qualitative studies since 1997. However, only studies published in English were included and there is a risk that relevant studies may not have been identified if other terminologies were used. The study population was sensibly broad, including children and adults, as well as parents and carers acting in a supportive capacity. The 23 included studies were from high-income countries and the most studied DBS technique was general anaesthesia. Notably, sixteen of the included studies focused upon children, with none centred upon medically compromised or older adults. The review authors noted that the theoretical stance, as well as cultural and contextual factors, were rarely specified within the primary studies. A strength of this qualitative systematic review was the authors' application of the GRADE-CERQual assessment tool, which provides guidance for how much confidence may be placed in findings from a systematic review (or evidence synthesis) involving qualitative research. Confidence in the included research studies ranged from low to high (by theme).
Most themes generated by this synthesis were associated with 'moderate' level confidence or above, according to the authors' application of GRADE-CERQual. The evidence synthesis ultimately led the review team to recommendations in which they 'broadly have confidence'. Patient and stakeholder experiences of care offer unique insight and rich data to inform our knowledge about DBS techniques and how they are perceived by patients and carers. It is incumbent upon researchers to ensure that qualitative studies are designed and reported appropriately, driving up quality and rigour. There is a need for more patient-centred qualitative research, and for this to expand beyond DGA in children to include experiences of a wider scope of DBS techniques across all age groups.
|
Trans-oral Extra Tonsillar Approach of Styloidectomy for Treatment of Eagle’s Syndrome among Operated Cases of the Department of Otolaryngology-Head and Neck Surgery at a Tertiary Care Hospital: A Descriptive Cross-sectional Study | 7a3488db-299c-4a5b-8d46-365352bf717e | 9107829 | Otolaryngology[mh] | Although there is great variation in the normal length of the styloid process, it is found to be 20-30 mm in the majority of patients. When it is longer than 30 mm, it is called an elongated styloid process. An elongated styloid process or calcified stylohyoid ligament causing recurrent throat pain or foreign body sensation, dysphagia, or facial pain is known as Eagle's syndrome. Eagle's syndrome is difficult to identify anatomically with limited clinical understanding. The elongated styloid process (ESP) has a variable incidence, i.e., 2-28%, and even fewer cases (4-10%) are symptomatic. Surgical removal of the styloid process to its normal limit via an extraoral or intraoral technique is regarded as the best option. This study aims to find out the prevalence of the trans-oral extra tonsillar approach of styloidectomy among the operated cases of the Department of Otolaryngology-Head and Neck Surgery at a tertiary care hospital.
This is a descriptive cross-sectional study done among 1,475 patients who underwent surgery at the Department of Otolaryngology-Head and Neck Surgery of Kathmandu Medical College and Teaching Hospital, Kathmandu, Nepal between July 2018 and September 2020. Ethical approval was taken from the Institutional Review Committee of Kathmandu Medical College and Teaching Hospital (Reference number: 0106201802). Patients who had undergone surgery in the Department of Otolaryngology-Head and Neck Surgery of Kathmandu Medical College and Teaching Hospital were included; non-operative cases and those who declined surgery were excluded. Informed written consent was taken from the participants. A convenience sampling technique was used. The sample size was calculated using the formula n = Z² × p × q / e² = (1.96)² × 0.5 × 0.5 / (0.04)² ≈ 600, where n = sample size, Z = 1.96 at the 95% confidence interval, p = prevalence of styloidectomy taken as 50% (0.5) to maximize the sample size, q = 1 − p, and e = margin of error (4%). As the sampling technique was convenience sampling, we doubled the calculated sample size to 1,200; in total, 1,475 cases were included in the study. Detailed history and thorough Ear, Nose, and Throat (ENT) examinations were done, including Nasopharyngeal Laryngoscopy (NPL). Pain due to other causes such as temporomandibular, dental, orthopedic, and pharyngoesophageal conditions was ruled out. The diagnosis was confirmed with a CT scan. Each side of the neck was taken as a separate entity. All operations were performed under general anesthesia. Under aseptic precautions, nasotracheal intubation was done and a Boyle-Davis mouth gag was applied. Infiltration with 2% xylocaine with adrenaline was given just medial to the palatoglossal fold and anterior tonsillar pillar. A vertical incision ~2 cm long was made at the site of infiltration and the elongated styloid process was felt by palpation.
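The sample-size arithmetic above can be checked with a short script (a sketch; the helper name is illustrative, but the formula and values are the ones stated in the text):

```python
def cochran_sample_size(z, p, e):
    """Cochran's formula: n = Z^2 * p * q / e^2, with q = 1 - p."""
    q = 1.0 - p
    return (z ** 2) * p * q / (e ** 2)

# Z = 1.96 (95% CI), p = 0.5 (maximizes n), e = 0.04 (4% margin of error)
n = cochran_sample_size(1.96, 0.5, 0.04)
print(round(n))  # 600, which the study then doubled to 1,200 for convenience sampling
```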
The fibers of the superior pharyngeal constrictor muscle were identified and the muscle fibers were split with the help of a blunt dissector. Finger dissection was used to expose the length of the styloid process. The stylohyoid ligament was cut, the styloid tip was engaged in a ring curette, and sharp dissection was made superiorly towards the skull base to strip the periosteum. The styloid process was removed with a bone nibbler and hemostasis was achieved. The mucosal incision was closed with one or two absorbable sutures. If elongated, the contralateral styloid process was removed similarly in the same setting. A sterile measuring tape was used to measure the length of the styloid process. Patients were evaluated in terms of duration of surgery, bleeding, post-operative pain, trismus, post-operative infection, dysphagia, weakness of any nerve/sensory disturbances, and remission of symptoms. A visual analog score (0-10) was used to assess pain and dysphagia subjectively. The data were entered into Microsoft Excel and analyzed using the Statistical Package for the Social Sciences (SPSS) version 20. The point estimate at the 95% confidence interval was calculated along with frequency and proportion for binary data.
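The point-estimate-and-interval calculation described here can be sketched in a few lines of Python. A normal-approximation (Wald) interval is assumed, since the paper does not name its method; small differences from the published figures are rounding:

```python
import math

def prevalence_with_ci(events, n, z=1.96):
    """Point estimate and Wald-type 95% CI for a proportion."""
    p = events / n
    se = math.sqrt(p * (1.0 - p) / n)  # standard error of the proportion
    return p, p - z * se, p + z * se

# 24 styloidectomies among 1,475 operated cases (see Results)
p, low, high = prevalence_with_ci(24, 1475)
print(f"{100 * p:.2f}% (95% CI {100 * low:.2f}-{100 * high:.2f})")
```

This gives roughly 1.63% with an interval of about 0.98-2.27%, close to the reported 1.62% (0.97-2.26); the discrepancies are rounding/truncation.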
Among the enrolled 1,475 patients, 24 (1.62%) patients (95% Confidence Interval = 0.97-2.26) underwent the trans-oral extra tonsillar approach of styloidectomy among the operated cases at the Department of Otolaryngology-Head and Neck Surgery at a tertiary care hospital. Among the 24 patients, 10 (41.7%) were males and 14 (58.3%) were females, with an average age of 38 years. The length of the styloid process was found to be 36 mm (32-46 mm). Post-operative scores were consistently lower after styloidectomy across all individual symptoms. Dysphagia showed the most significant improvement from pre- to post-operative scores. Surgeries were uneventful and follow-up revealed patients to be symptom-free after surgery. The average operative time was 32 minutes (25-45 minutes). Postoperatively, three patients experienced moderate pain, trismus, and mild dysphagia in the first week. Two patients developed wound dehiscence at the suture site, which healed secondarily. No other intra- or post-operative surgery-related side effects such as bleeding, retropharyngeal infection, or airway edema were observed. Wound healing was on time and there was no postoperative infection. No paresthesia of any nerve or significant fibrosis in the intraoral scar was noted.
Although styloid process elongation is not uncommon, true Eagle's syndrome is a rare disease. The majority of patients with elongated and mineralized styloid processes are asymptomatic and require no treatment. When symptoms do exist, their severity is unrelated to the length or extent of the mineralization process. As a result, diagnosing Eagle's syndrome can be challenging, and the differential diagnosis should include all conditions causing cervicofacial pain, including trigeminal, sphenopalatine, and glossopharyngeal neuralgias, myofascial pain, mastoiditis, dental pain, chronic tonsillitis, pharyngitis, submandibular sialadenitis, pharyngeal foreign body, neoplasia, and migraine. The primary diagnostic guide for Eagle's syndrome is the patient's medical history. Validating the diagnosis involves palpation of the lateral tonsillar fossa and infiltration of local anesthesia into the tonsillar fossa, combined with radiological examination. Eagle's syndrome can be treated conservatively, surgically, or both. Reassurance, analgesics, and local corticosteroid or anesthetic administration are all options for conservative treatment, but surgical shortening is the most rewarding and effective way to alleviate symptoms. Surgical management of ESP has been described using a variety of transoral and extraoral cervical approaches, each with its own set of benefits and drawbacks. A noteworthy comparison between intraoral and extraoral surgical approaches was made by Strauss M, et al. and Chase DC, et al. Our present study shows that the transoral, extra-tonsillar approach for styloidectomy is safe, easy to perform, and quick, and that it avoids an external scar as well as extensive fascial dissection. Moreover, the recovery time of this procedure was short. Cai Y, et al. reported that postoperative pain after tonsil-sparing styloidectomy was significantly lower at one week postoperatively.
Three of our patients, who underwent bilateral styloidectomy in the same setting, experienced moderate pain and trismus in the first week post-operatively. Since intraoral approaches may cause transient edema at the operation site and in the submandibular and retromandibular regions, these brief post-operative complications can be considered common. In our study, there were no major intraoperative or postoperative complications, and there was complete remission of symptoms in all patients at 6-month follow-up. Raychowdhary R, et al. also described an intraoral, extra-tonsillar approach without any complications.
The prevalence of the trans-oral extra tonsillar approach of styloidectomy among the operated cases at the Department of Otolaryngology-Head and Neck Surgery is low in comparison to other studies done in similar settings. The most effective treatment of Eagle's syndrome is shortening or removal of the styloid process. The transoral, extra-tonsillar approach is novel in terms of safety and adequacy for treating patients with clinically and radiologically confirmed Eagle's syndrome, without requiring tonsillectomy.
|
Toll-Like Receptor 4 and 8 are Overexpressed in Lung Biopsies of Human Non-small Cell Lung Carcinoma | b088356f-4076-4654-a217-0600de8c5994 | 11872755 | Surgical Procedures, Operative[mh] | Lung cancer represents the leading cause of death from cancer in industrialized countries. It includes two main types of carcinoma, namely non-small cell lung cancer (NSCLC), affecting epithelial cells, and small cell lung cancer (SCLC), which involves nervous cells or hormone-secreting cells. NSCLC is the most common lung cancer, accounting for 85% of total cases among lung cancer patients. NSCLC is further divided into subtypes including adenocarcinoma (ADC), squamous cell carcinoma (LUSC), and large cell carcinoma. The most frequent among smokers is ADC. In cancer, the inflammatory response represents a critical mechanism of the innate immune system whose activation provides tumor surveillance to identify and remove cancerous cells before they can cause further injury. It is well known that the innate immune response is initiated by Toll-like receptors (TLRs), whose role in the pathogenesis of cancer and tumor progression is widely debated. TLRs are a class of transmembrane proteins belonging to the pattern recognition receptors (PRRs), well known for their ability to sense a variety of pathogen-associated molecular patterns (PAMPs) and damage-associated molecular patterns (DAMPs). In humans, at least 10 members of the TLR family have been identified, which are widely distributed and variably localized on the cell surface or in the membranes of intracellular endosomes. The expression of TLRs in resident lung cells as well as in infiltrating myeloid and lymphoid cells is documented. Their activation triggers an inflammatory response which orchestrates innate immunity so as to preserve tissue homeostasis, repair, and regeneration.
However, since prolonged TLR activation seems to be associated with increased risks of cancer and tumorigenesis, their role in cancer is currently controversial. The activation of TLR signaling can create an immunosuppressive microenvironment that promotes cell proliferation, tumor progression, invasion, and migration. However, TLRs can also induce apoptosis, eliciting anti-tumor effects, so the precise role of the innate immune system in NSCLC remains in doubt, since both pro- and anti-inflammatory responses can occur. Despite the conflicting findings, TLRs have recently gained great interest in lung cancer research, including NSCLC. In this regard, compared to control subjects, significant changes in the expression of TLR2, 3, 4, 7, 8, and 9 were found in peripheral blood cells and in lung tissues of NSCLC patients, while the overexpression of TLR1, 2, 4, and 9 was detected in the serum of NSCLC patients. Furthermore, increased expression of soluble TLR4 (sTLR4) was found to contribute to NSCLC development and was correlated with malignancy and poor survival. Here, we examined the gene and protein expression levels of TLR4 and 8 in isolated PBMC and in lung tissues of healthy subjects and of NSCLC (LUAD and LUSC) patients. We focused on the TLR4 and 8 members based on a previous bioinformatic study of the transcriptome signatures of the TLR family carried out in healthy, LUAD, and LUSC tissue samples.
Bioinformatic Study
Datasets
Expression data from the Gene Expression Omnibus (GEO) database of whole human genome arrays and the ArrayExpress Archive of Functional Genomics Data (ArrayExpress), generated using the Affymetrix Human Genome U133 Plus 2.0 platform, were downloaded and processed through the Genevestigator V3 suite (NEBION AG, Zurich, Switzerland). The microarray data in Genevestigator were normalized at two levels: robust multi-array average within experiments (using the Bioconductor package "affy" and a customized version of the package "affyExtensions") and trimmed mean adjustment to a target for normalization between datasets. For the latter, the trimmed mean is determined by calculating the mean of all the expression values in an experiment (across all samples) after excluding the top 5% and the bottom 5%. The combination of these two levels of normalization makes the data highly comparable across different experiments, thus allowing data pooling without further normalization. The Genevestigator database was queried in December 2021. We included in the analysis only the arrays for mRNA samples that (1) were not obtained by laser capture microdissection of single cells and (2) were not subjected to in vitro experimental treatments. We extracted and considered data from 1194 arrays of healthy and cancer tissues. The gene expression profile included data of TLR family members in the lung of healthy subjects (HSs) (n = 120 from datasets HS-00017, HS-00217, HS-00554, HS-00571, HS-00576, HS-00649, HS-00826, HS-01187, HS-01269, and HS-01525), in LUAD (n = 813 from datasets HS-00002, HS-00546, HS-00554, HS-00560, HS-00649, HS-00863, HS-01015, HS-01062, HS-01126, HS-01192, HS-01193, and HS-01196), and in LUSC (n = 261 from datasets HS-00002, HS-00546, HS-00560, HS-00649, HS-00863, HS-01062, and HS-01126).
In Silico Gene Expression Analysis
Normalized gene expression data (expressed as log2 values) were downloaded from the Genevestigator V3 suite (NEBION AG, Zurich, Switzerland). TLR family (TLR1, TLR2, TLR3, TLR4, TLR5, TLR6, TLR7, TLR8, TLR9, and TLR10) gene expression was analyzed in lung samples from HSs and in tumor tissue samples from LUAD and LUSC patients.
Survival Analysis
The Gene Expression Profiling Interactive Analysis (GEPIA2) server was employed to investigate the association between TLR gene expression and survival outcomes, using gene expression data and corresponding survival information from the TCGA-LUAD (n = 478) and TCGA-LUSC (n = 482) series. Kaplan–Meier curves of overall survival of NSCLC patients were generated with the quartile group cutoff option.
Patient Cohort
In the present study, subjects from the Thoracic Surgery Unit, Department of Surgical Sciences, Santa Maria della Misericordia Hospital, University of Perugia Medical School were enrolled. Written informed consent for the use of blood and tissues along with clinical information for research purposes had been obtained from the donors in compliance with ethical and legal guidelines. We enrolled patients surgically treated for NSCLC with anatomical resections, and patients with benign non-inflammatory disease as control cases. This is a consecutive series of patients, prospectively enrolled between October 2021 and October 2022. Each patient underwent thorough pre-operative functional evaluation with spirometry, ECG, echocardiography, and blood gas analysis. Clinical staging was performed with total-body CT, PET-CT, brain CT, and EBUS when indicated. Pre-operative diagnosis was obtained with endobronchial or transparietal biopsy. Patients with adequate functional condition and clinical stage I-IIIA tumors, according to the VIII TNM staging system, were submitted to anatomical lung resection (lobectomy/bi-lobectomy). None of the patients received neoadjuvant treatment.
Patients with recurrent pneumothorax, or pneumothorax with persistent air leak, were treated with lung resection when areas of pulmonary alterations (blebs) were identified.
mRNA Analysis of TLR4 and 8
TLR4 and 8 were chosen based on the significance found in the previous bioinformatic study. A total of 40 samples were analyzed by qPCR: 31 cases and 9 control donors. 5 mL of peripheral whole blood was collected via venipuncture into PAXgene Blood RNA Tubes (Qiagen, Valencia, CA, cat. no. 762165) for simultaneous lysis of the blood cells and immediate stabilization of intracellular RNA. Total RNA was isolated and purified with the PAXgene Blood RNA commercial kit (Qiagen, Valencia, CA, cat. no. 762174). This protocol stabilizes the transcripts and prevents possible changes, in order to obtain reliable gene expression data. RNA concentration and purity were determined using a Nanodrop spectrophotometer (Eppendorf, Hamburg, Germany). All RNA samples were immediately stored at − 80 °C until use. The High Capacity cDNA Reverse Transcription Kit (catalogue no. 4368814, ThermoFisher, USA) was used for the cDNA synthesis, according to the manufacturer's instructions. Per sample, 0.5 µg or 1.0 µg of RNA was used. All cDNA samples were immediately stored at − 80 °C until use. The final volume of the RT reaction was 50 µL. TaqMan real-time PCR assays for the TLR4 and 8 genes and two reference genes (β-actin and 18S) were selected from the Thermo Fisher Scientific catalogue (Hs01060206_m1 for TLR4; Hs07292888_s1 for TLR8; Hs01060665_g1 for β-actin; Hs99999901_m1 for 18S). The reference genes were selected for their consistent expression levels in previous experiments conducted with human blood. All reactions were prepared using TaqMan™ Gene Expression master mix (catalogue no. 4369016, Thermo Fisher Scientific, USA) and were run on an Applied Biosystems 7500 Real-Time PCR System. Per each reaction, 50 ng of cDNA was used, in a total volume of 20 µL. All samples were run in triplicate.
A pre-cycling step (2′ at 50 °C + 10′ at 95 °C) followed by 40 amplification cycles (15″ at 95 °C + 1′ at 60 °C + 1′ at 65 °C) was used for all genes. A negative control and a standard curve were included in each plate. Efficiency was calculated by generating a standard curve for each assay. Gene expression levels were calculated using the delta-delta Cq method. Data were statistically analyzed with a Mann–Whitney U test and a p value < 0.05 was considered significant.
Immunohistochemical Analysis
A consecutive series of 29 patients with primary operable non-small cell lung cancer was investigated. Histological subtype was assigned based on H&E slides, according to the 2021 World Health Organization (WHO) classification of lung tumors. Nine patients, who underwent surgical resection for non-neoplastic lung pathology, were recruited as controls. Surgical specimens were formalin-fixed (10% buffered formalin) and paraffin-embedded (FFPE). Sections of 4 µm were taken and placed on slides with a permanent positively charged surface, both to obtain the Hematoxylin and Eosin (H&E) stain and the immunohistochemical (IHC) stains. The H&E stain was carried out using a Leica ST5020 Multistainer (Leica Microsystems), employing the ST Infinity H&E Staining System kit (Leica Biosystems). All the IHC stains (peroxidase immunoenzymatic reaction with development in diaminobenzidine) were obtained by employing the BOND-III fully automated immunohistochemistry stainer (Leica Biosystems). For TLR4, immunohistochemical slides were prepared using heat-induced antigen retrieval with the ready-to-use Bond™ Epitope Retrieval Solution 2 (Leica Biosystems) for 20 min, followed by primary antibody incubation for 30′ with the TLR4 Monoclonal Antibody 76B357.1 (dilution 1:300, Invitrogen-ThermoFisher Scientific).
For TLR8, immunohistochemical slides were prepared using heat-induced antigen retrieval with the ready-to-use Bond™ Epitope Retrieval Solution 1 (Leica Biosystems, catalog no. AR9961) for 20 min, followed by primary antibody incubation for 30′ with the TLR8 Monoclonal Antibody 44C143 (dilution 1:2000, Invitrogen-ThermoFisher Scientific). Appropriate negative and positive control slides were processed concurrently. The immunohistochemical stains for TLR4 and TLR8 were evaluated on neoplastic cells as intensity of the stain (evaluated as 0: absent; 1+: mild; 2+: moderate; 3+: intense) and the percentage of the tumor cells labeled. The study protocol received the necessary approval from the Bioethics Committee at the Comitato Etico Aziende Sanitarie (CEAS), Umbria, code TREG001.
Statistical Analysis
Statistical analysis was conducted using Prism v.9.4.1 (GraphPad, San Diego, CA, USA). The Kolmogorov–Smirnov normality test was performed to analyze the distribution of data. p values were calculated using the ordinary one-way ANOVA (Tukey) test for normally distributed data and the Kruskal–Wallis (Dunn) test for data with skewed distributions. p values < 0.05 were considered statistically significant. Descriptive analyses for the gene expression experiments were performed for all analyzed genes (TLR4, TLR8, and β-actin), showing the main distribution parameters (mean, standard deviation, IQR). The expression levels of the TLR4 and TLR8 genes were normalized to the reference gene β-actin using the comparative Ct method. The Delta Ct (ΔCt) values were calculated as the difference between the TLR data and the corresponding β-actin data for both cases and controls. Subsequently, the Delta Delta Ct (ΔΔCt) values were determined to compare the expression levels between cases and controls, and finally the fold change in gene expression was calculated using the formula 2^(−ΔΔCt) for both the TLR4 and TLR8 genes.
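The comparative Ct arithmetic described here can be sketched in a few lines of Python (the Ct values below are invented for illustration; only the arithmetic follows the text):

```python
def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Comparative Ct method: fold change = 2 ** (-ddCt), where
    dCt = Ct(target) - Ct(reference) and ddCt = dCt(case) - dCt(control)."""
    d_ct_case = ct_target_case - ct_ref_case  # e.g. TLR4 vs beta-actin in a case
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl  # same gene pair in a control
    dd_ct = d_ct_case - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: the target amplifies two cycles earlier in the case,
# so expression is 2**2 = 4-fold higher than in the control.
print(fold_change(24.0, 18.0, 26.0, 18.0))  # 4.0
```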
A two-sample t-test was then conducted to compare the mean fold changes between TLR4 and TLR8. This test determines whether there is a statistically significant difference between the two groups, for both cases and controls. The Pearson correlation coefficient was calculated to assess the strength and direction of the linear relationship between the fold changes of TLR4 and TLR8. Finally, K-means clustering was applied to the fold change data of TLR4 and TLR8 to identify potential subgroups within the data. The number of clusters was set to 2, and the clustering was performed using the KMeans class from the sklearn.cluster module in Python. The results were visualized using a scatter plot, where each point represents a sample and the color indicates the cluster assignment. Analysis was performed with RStudio (R version 4.3.2; R Core Team (2023), R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria, https://www.R-project.org/) and Python 3 (Scikit-learn: Machine Learning in Python, Pedregosa et al., JMLR 12, pp. 2825-2830, 2011).
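The Pearson correlation step can be reproduced without external dependencies (a sketch with made-up fold-change values; the study used its own TLR4/TLR8 data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical TLR4 / TLR8 fold changes for five samples
tlr4 = [1.2, 2.5, 3.1, 0.8, 2.0]
tlr8 = [1.0, 2.7, 2.9, 0.9, 2.2]
print(round(pearson_r(tlr4, tlr8), 3))
```

A value near +1 indicates that samples with high TLR4 fold changes also tend to show high TLR8 fold changes.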
Datasets Expression data from the Gene Expression Omnibus (GEO) database of whole human genome arrays and the ArrayExpress Archive of Functional Genomics Data (ArrayExpress) , generated using the Affymetrix Human Genome U133 Plus 2.0 platform, were downloaded and processed through the Genevestigator V3 suite (NEBION AG, Zurich, Switzerland) . The microarray data in Genevestigator were normalized at two levels: robust multi-array average within experiments (using the Bioconductor package "affy" and a customized version of the package "affyExtensions") and trimmed mean adjustment to a target for normalization between datasets. For the latter, the trimmed mean is determined by calculating the mean of all the expression values in an experiment (across all samples) after excluding the top 5% and the bottom 5%. The combination of these two levels of normalization makes the data highly comparable across different experiments, thus allowing data pooling without further normalization. The Genevestigator database was queried in December 2021. We included in the analysis only the arrays for mRNA samples that (1) were not obtained by laser capture microdissection of single cells and (2) were not subjected to in vitro experimental treatments. We extracted and considered data from 1194 arrays of healthy and cancer tissues. The gene expression profile included data of TLR family members in the lung of healthy subjects (HSs) ( n = 120 from datasets HS-00017, HS-00217, HS-00554, HS-00571, HS-00576, HS-00649, HS-00826, HS-01187, HS-01269, and HS-01525), in LUAD ( n = 813 from datasets HS-00002, HS-00546, HS-00554, HS-00560, HS-00649, HS-00863, HS-01015, HS-01062, HS-01126, HS-01192, HS-01193, and HS-01196), and in LUSC ( n = 261 from datasets HS-00002, HS-00546, HS-00560, HS-00649, HS-00863, HS-01062, and HS-01126). 
In Silico Gene Expression Analysis Normalized gene expression data (expressed as log2 values) were downloaded from the Genevestigator V3 suite (NEBION AG, Zurich, Switzerland). TLR family ( TLR1 , TLR2 , TLR3 , TLR4 , TLR5 , TLR6 , TLR7 , TLR8 , TLR9, and TLR10 ) gene expression was analyzed in lung samples from HSs and in tumor tissue samples from LUAD and LUSC patients. Survival Analysis The Gene Expression Profiling Interactive Analysis (GEPIA2) server was employed to investigate the association between TLR gene expression and survival outcomes using gene expression data and corresponding survival information from the TCGA-LUAD ( n = 478) and TGCA-LUSC ( n = 482) series. Kaplan–Meier curves of overall survival of NSCLC patients were generated with the quartile group cutoff option. Patient Cohort In the present study, subjects from the Thoracic Surgery Unit, Department of Surgical Sciences, Santa Maria della Misericordia Hospital, University of Perugia Medical School were enrolled. Written informed consent for the use of blood and tissues along with clinical information for research purposes had been obtained from the donor in compliance with ethical and legal guidelines. We enrolled patients surgically treated for NSCLC with anatomical resections, and patients with benign non-inflammatory disease as control cases. This is a consecutive series of patients, prospectively enrolled between October 2021 and October 2022. Each patient underwent thorough pre-operative functional evaluation with spirometry, ECG, echocardiography, and emogasanalysis. Clinical staging was performed with total-body CT, PET-CT, brain CT and EBUS when indicated. Pre-operative diagnosis was obtained with endobronchial or transparietal biopsy. Patients with adequate functional condition and clinical stage I-IIIA tumor, according to the VIII TNM staging system, were submitted to anatomical lung resection (lobectomy/bi-lobectomy). None of the patients received neoadjuvant treatment. 
Patients with recurrent pneumothorax, or pneumothorax with persistent air leak, were treated with lung resection when areas of pulmonary alterations (blebs) were identified.
Expression data from the Gene Expression Omnibus (GEO) database of whole human genome arrays and the ArrayExpress Archive of Functional Genomics Data (ArrayExpress) , generated using the Affymetrix Human Genome U133 Plus 2.0 platform, were downloaded and processed through the Genevestigator V3 suite (NEBION AG, Zurich, Switzerland) . The microarray data in Genevestigator were normalized at two levels: robust multi-array average within experiments (using the Bioconductor package "affy" and a customized version of the package "affyExtensions") and trimmed mean adjustment to a target for normalization between datasets. For the latter, the trimmed mean is determined by calculating the mean of all the expression values in an experiment (across all samples) after excluding the top 5% and the bottom 5%. The combination of these two levels of normalization makes the data highly comparable across different experiments, thus allowing data pooling without further normalization. The Genevestigator database was queried in December 2021. We included in the analysis only the arrays for mRNA samples that (1) were not obtained by laser capture microdissection of single cells and (2) were not subjected to in vitro experimental treatments. We extracted and considered data from 1194 arrays of healthy and cancer tissues. The gene expression profile included data of TLR family members in the lung of healthy subjects (HSs) ( n = 120 from datasets HS-00017, HS-00217, HS-00554, HS-00571, HS-00576, HS-00649, HS-00826, HS-01187, HS-01269, and HS-01525), in LUAD ( n = 813 from datasets HS-00002, HS-00546, HS-00554, HS-00560, HS-00649, HS-00863, HS-01015, HS-01062, HS-01126, HS-01192, HS-01193, and HS-01196), and in LUSC ( n = 261 from datasets HS-00002, HS-00546, HS-00560, HS-00649, HS-00863, HS-01062, and HS-01126).
Normalized gene expression data (expressed as log2 values) were downloaded from the Genevestigator V3 suite (NEBION AG, Zurich, Switzerland). TLR family ( TLR1 , TLR2 , TLR3 , TLR4 , TLR5 , TLR6 , TLR7 , TLR8 , TLR9, and TLR10 ) gene expression was analyzed in lung samples from HSs and in tumor tissue samples from LUAD and LUSC patients.
The Gene Expression Profiling Interactive Analysis (GEPIA2) server was employed to investigate the association between TLR gene expression and survival outcomes using gene expression data and corresponding survival information from the TCGA-LUAD ( n = 478) and TGCA-LUSC ( n = 482) series. Kaplan–Meier curves of overall survival of NSCLC patients were generated with the quartile group cutoff option.
In the present study, subjects were enrolled from the Thoracic Surgery Unit, Department of Surgical Sciences, Santa Maria della Misericordia Hospital, University of Perugia Medical School. Written informed consent for the use of blood and tissues, along with clinical information for research purposes, had been obtained from the donors in compliance with ethical and legal guidelines. We enrolled patients surgically treated for NSCLC with anatomical resections, and patients with benign non-inflammatory disease as control cases. This is a consecutive series of patients, prospectively enrolled between October 2021 and October 2022. Each patient underwent a thorough pre-operative functional evaluation with spirometry, ECG, echocardiography, and blood gas analysis. Clinical staging was performed with total-body CT, PET-CT, brain CT, and EBUS when indicated. Pre-operative diagnosis was obtained with endobronchial or transparietal biopsy. Patients with an adequate functional condition and clinical stage I–IIIA tumors, according to the 8th edition of the TNM staging system, underwent anatomical lung resection (lobectomy/bi-lobectomy). None of the patients received neoadjuvant treatment. Patients with recurrent pneumothorax, or pneumothorax with persistent air leak, were treated with lung resection when areas of pulmonary alteration (blebs) were identified.
TLR4 and TLR8 were chosen based on the significant modulation observed in the preceding bioinformatic analysis. A total of 40 samples were analyzed by qPCR: 31 cases and 9 control donors. Five milliliters of peripheral whole blood was collected via venipuncture into PAXgene Blood RNA Tubes (Qiagen, Valencia, CA, cat. no. 762165) for simultaneous lysis of the blood cells and immediate stabilization of intracellular RNA. Total RNA was isolated and purified with the PAXgene Blood RNA commercial kit (Qiagen, Valencia, CA, cat. no. 762174). This protocol stabilizes the transcripts and prevents changes in their abundance, yielding reliable gene expression data. RNA concentration and purity were determined using a NanoDrop spectrophotometer (Eppendorf, Hamburg, Germany). All RNA samples were immediately stored at −80 °C until use. The High Capacity cDNA Reverse Transcription Kit (cat. no. 4368814, Thermo Fisher Scientific, USA) was used for cDNA synthesis, according to the manufacturer's instructions. Per sample, 0.5 µg or 1.0 µg of RNA was used. All cDNA samples were immediately stored at −80 °C until use. The final volume of the RT reaction was 50 µL. TaqMan real-time PCR assays for the TLR4 and TLR8 genes and two reference genes (β-actin and 18S) were selected from the Thermo Fisher Scientific catalogue (Hs01060206_m1 for TLR4; Hs07292888_s1 for TLR8; Hs01060665_g1 for β-actin; Hs99999901_m1 for 18S). The reference genes were selected for their consistent expression levels in previous experiments conducted with human blood. All reactions were prepared using TaqMan Gene Expression Master Mix (cat. no. 4369016, Thermo Fisher Scientific, USA) and were run on an Applied Biosystems 7500 Real-Time PCR System. Per reaction, 50 ng of cDNA was used, in a total volume of 20 µL. All samples were run in triplicate. A pre-cycling step (2 min at 50 °C + 10 min at 95 °C) followed by 40 amplification cycles (15 s at 95 °C + 1 min at 60 °C + 1 min at 65 °C) was used for all genes.
A negative control and a standard curve were included in each plate. Efficiency was calculated by generating a standard curve for each assay. Gene expression levels were calculated using the delta-delta Cq method. Data were statistically analyzed with a Mann–Whitney U test, and a p value < 0.05 was considered significant.
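As a worked illustration of the two computations above, the amplification efficiency derived from the standard-curve slope (E = 10^(−1/slope) − 1) and the delta-delta Cq fold change, here is a minimal sketch. The function names and toy Cq values are ours, and the 2^−ΔΔCq form assumes an efficiency near 100%:

```python
import numpy as np

def pcr_efficiency(log10_input, cq):
    """Amplification efficiency from a standard curve: E = 10**(-1/slope) - 1,
    where the slope comes from regressing Cq on log10(template input)."""
    slope, _intercept = np.polyfit(log10_input, cq, 1)
    return 10.0 ** (-1.0 / slope) - 1.0

def fold_change(cq_target_case, cq_ref_case, cq_target_ctrl, cq_ref_ctrl):
    """Delta-delta Cq relative expression, assuming ~100% efficiency
    (i.e., a doubling of product per cycle)."""
    ddcq = (cq_target_case - cq_ref_case) - (cq_target_ctrl - cq_ref_ctrl)
    return 2.0 ** (-ddcq)

# Toy 10-fold dilution series: a perfect assay loses ~3.32 cycles per decade.
dilutions = [0.0, 1.0, 2.0, 3.0]             # log10 of template input
cqs = [30.0 - 3.3219 * d for d in dilutions]
efficiency = pcr_efficiency(dilutions, cqs)  # ~1.0, i.e. ~100%
```

A slope of about −3.32 on the standard curve thus corresponds to the ideal doubling per cycle assumed by the 2^−ΔΔCq formula.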
A consecutive series of 29 patients with primary operable non-small cell lung cancer was investigated. Histological subtype was assigned based on H&E slides, according to the 2021 World Health Organization (WHO) classification of lung tumors. Nine patients, who underwent surgical resection for non-neoplastic lung pathology, were recruited as controls. Surgical specimens were formalin-fixed (10% buffered formalin) and paraffin-embedded (FFPE). Sections of 4 µm were cut and placed on slides with a permanent positively charged surface, both for the Hematoxylin and Eosin (H&E) stain and for the immunohistochemical (IHC) stains. The H&E stain was carried out using a Leica ST5020 Multistainer (Leica Microsystems), employing the ST Infinity H&E Staining System kit (Leica Biosystems). All the IHC stains (peroxidase immunoenzymatic reaction with development in diaminobenzidine) were obtained using the BOND-III fully automated immunohistochemistry stainer (Leica Biosystems). For TLR4, immunohistochemical staining was carried out using heat-induced antigen retrieval with the ready-to-use Bond Epitope Retrieval Solution 2 (Leica Biosystems) for 20 min, followed by primary antibody incubation for 30 min with the TLR4 monoclonal antibody 76B357.1 (dilution 1:300, Invitrogen-Thermo Fisher Scientific). For TLR8, immunohistochemical staining was carried out using heat-induced antigen retrieval with the ready-to-use Bond Epitope Retrieval Solution 1 (Leica Biosystems, catalog no. AR9961) for 20 min, followed by primary antibody incubation for 30 min with the TLR8 monoclonal antibody 44C143 (dilution 1:2000, Invitrogen-Thermo Fisher Scientific). Appropriate negative and positive control slides were processed concurrently. The immunohistochemical stains for TLR4 and TLR8 were evaluated on neoplastic cells as staining intensity (0: absent; 1+: mild; 2+: moderate; 3+: intense) and as the percentage of tumor cells labeled.
The study protocol received the necessary approval from the Bioethics Committee at the Comitato Etico Aziende Sanitarie (CEAS), Umbria, code TREG001.
Statistical analysis was conducted using Prism v.9.4.1 (GraphPad, San Diego, CA, USA). The Kolmogorov–Smirnov normality test was performed to analyze the distribution of the data. p values were calculated using ordinary one-way ANOVA (Tukey) for normally distributed data and the Kruskal–Wallis (Dunn) test for data with skewed distributions. p values < 0.05 were considered statistically significant. Descriptive analyses for the gene expression experiments were performed for all analyzed genes (TLR4, TLR8, and β-actin), reporting the main distribution parameters (mean, standard deviation, IQR). The expression levels of the TLR4 and TLR8 genes were normalized to the reference gene β-actin using the comparative Ct method. The delta Ct (ΔCt) values were calculated as the difference between the TLR data and the corresponding β-actin data for both cases and controls. Subsequently, the delta delta Ct (ΔΔCt) values were determined to compare the expression levels between cases and controls, and finally the fold change in gene expression was calculated using the formula 2^−ΔΔCt for both the TLR4 and TLR8 genes. A two-sample t-test was then conducted to compare the mean fold changes between TLR4 and TLR8; this test determines whether there is a statistically significant difference between the two groups for both cases and controls. The Pearson correlation coefficient was calculated to assess the strength and direction of the linear relationship between the fold changes of TLR4 and TLR8. Finally, K-means clustering was applied to the fold change data of TLR4 and TLR8 to identify potential subgroups within the data. The number of clusters was set to 2, and the clustering was performed using the KMeans class from the sklearn.cluster module in Python. The results were visualized using a scatter plot, where each point represents a sample and the color indicates the cluster assignment. Analysis was performed with RStudio (R version 4.3.2; R Core Team (2023). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/ ) and Python 3 (scikit-learn: Machine Learning in Python; Pedregosa et al., JMLR 12, pp. 2825–2830, 2011).
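The correlation and clustering steps can be sketched with a numpy-only illustration. The two-cluster Lloyd's loop below is a simplified stand-in for `sklearn.cluster.KMeans` (the deterministic initialization from the first and last samples is our simplification, it omits empty-cluster handling, and the fold-change pairs are invented):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two vectors."""
    return float(np.corrcoef(x, y)[0, 1])

def kmeans2(points, n_iter=100):
    """Minimal two-cluster Lloyd's k-means (stand-in for sklearn's KMeans).
    Centers start at the first and last samples; no empty-cluster handling."""
    pts = np.asarray(points, dtype=float)
    centers = pts[[0, -1]].copy()
    labels = np.zeros(len(pts), dtype=int)
    for _ in range(n_iter):
        # Distance of every point to each center, then nearest-center labels.
        dists = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centers = np.array([pts[labels == k].mean(axis=0) for k in (0, 1)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Toy (TLR4, TLR8) fold-change pairs forming two loose groups.
fold_changes = np.array([[0.4, 0.5], [0.6, 0.4], [0.5, 0.6],
                         [2.1, 1.9], [1.8, 2.2], [2.0, 2.0]])
labels, centers = kmeans2(fold_changes)
r = pearson_r(fold_changes[:, 0], fold_changes[:, 1])
```

Plotting `fold_changes` colored by `labels` reproduces the kind of scatter plot described above.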
In Silico Gene Expression Evaluation of TLRs

The gene expression levels of TLR family receptors were assessed in 1194 lung tissue samples, including healthy lung tissue (HSs, n = 120), LUAD ( n = 813), and LUSC ( n = 261). The expression values of TLRs in individual samples are shown in Fig. . Our results show that, in both NSCLC subtypes, TLR1 , TLR4 , TLR5 , and TLR8 were downregulated, whereas TLR6 and TLR9 were upregulated. Conversely, TLR2 and TLR3 were downregulated in LUSC and not modulated in LUAD, while TLR7 was upregulated in LUAD and not modulated in LUSC. TLR10 was unchanged in both tumor types. We then evaluated the extent of modulation and considered biologically relevant those modulations that caused at least a 40% shift in the average expression value in the tumor compared to HSs. Only four receptors met this criterion: TLR3 , TLR4 , TLR7 , and TLR8 . For subsequent analyses, we focused only on the TLRs that showed modulation in both cancer subtypes. The two receptors meeting our criteria were TLR4 (−70% expression in LUAD and −71% in LUSC) and TLR8 (−42% in LUAD and −58% in LUSC) (Fig. ).

TLR4 and TLR8 Expression Impact on Patient Survival

Finally, we queried the TCGA database to assess whether the expression levels of TLR4 and TLR8 affected the overall survival of patients with LUAD ( n = 478) and LUSC ( n = 482). Using the GEPIA2 server, we generated Kaplan–Meier survival curves, comparing the top 25% of patients with the highest expression levels (first quartile) with the bottom 25% of patients exhibiting the lowest expression levels (fourth quartile). The results indicate that low levels of TLR4 positively impact the survival of LUSC patients ( p = 0.025) (Fig. , left panel). Additionally, data near statistical significance ( p = 0.056) suggest that high levels of TLR8 positively impact the survival of LUAD patients (Fig. , right panel).
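Because the Genevestigator values are log2-transformed, a percentage shift such as the −70% reported for TLR4 corresponds to a difference between the group log2 means, back-transformed to the linear scale. A small sketch of this conversion (our own illustration; the paper does not state exactly how the percentage was computed, so treat the linear back-transformation as an assumption):

```python
def percent_change(mean_log2_tumor, mean_log2_healthy):
    """Percent change in linear-scale expression implied by the
    difference between two log2 group means."""
    return (2.0 ** (mean_log2_tumor - mean_log2_healthy) - 1.0) * 100.0

# A log2 difference of about -1.74 corresponds to roughly -70% expression.
shift = percent_change(mean_log2_tumor=-1.737, mean_log2_healthy=0.0)
```

Under this reading, the 40% downregulation threshold corresponds to a log2 difference of about −0.74.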
Patients

We enrolled 42 patients (20 females and 22 males), with a mean age of 61.33 years (range 17–84). Of these, 32 patients were affected by NSCLC, whereas 10 were control cases, affected by spontaneous pneumothorax (9/10) or congenital cystic malformation (1/10). The clinical and pathological characteristics of the patients affected by NSCLC are summarized in Table . All patients were treated by anatomical resection: we recorded 30 lobectomies, one bi-lobectomy, and one sleeve lobectomy. Minimally invasive access (Video-Assisted Thoracic Surgery, VATS) was chosen in 25/32 cases (78.12%), whereas thoracotomy was preferred in 7/32 (21.88%). Patients enrolled as control cases underwent VATS wedge resection; the only exception was the patient with congenital cystic malformation, who underwent lobectomy via thoracotomy.

Quantification of TLR4 and TLR8 mRNA Levels

We collected TLR4, TLR8, and β-actin gene expression data in PBMC of NSCLC cases and controls. Table presents the descriptive statistics for the three genes considered. The sample consists of 31 cases and 9 controls. For all genes, the mean gene expression appears to be higher in the cases group, whereas these genes appear downregulated in the controls. Before comparing the genes, we calculated the fold change of TLR4 and TLR8 using β-actin as the housekeeping gene. The distribution of the gene fold changes is reported in Fig. . We then performed a t-test to determine whether there were significant differences among the genes. No significant differences were observed between TLR4 and TLR8 in cases ( t : −0.318, p value: 0.752) or in controls ( t : −0.239, p value: 0.812). In addition, no differences between cases and controls were observed in TLR4 expression ( t : −0.796, p value: 0.429) or TLR8 expression ( t : −0.676, p value: 0.502) (Fig. ).
TLR4 and TLR8 show a significant correlation in their expression levels, with a Pearson correlation coefficient of 0.542 ( p value: 1.96 × 10−3) in controls and 0.876 ( p value: 1.23 × 10−5) in cases. The correlation between TLR4 and TLR8 reveals three possible gene expression clusters, obtained by applying the KMeans clustering algorithm to the fold change data for the TLR4 and TLR8 genes (Fig. ). The points in the graph represent samples, colored according to their respective clusters. The x-axis represents the fold change of the TLR4 gene, while the y-axis represents the fold change of the TLR8 gene. The color bar indicates the different clusters identified by the KMeans algorithm.

Immunohistochemical Analysis

TLR4 and TLR8 were chosen based on their significant modulation in the preceding bioinformatic analysis. IHC was used to detect the protein expression levels of TLR4 and TLR8 in the lung tissues of NSCLC cases and in non-tumor samples (controls). Detailed clinicopathological information for the patients included in the study is provided in Supplementary Table 1, in which we report the staining intensity, scored on a scale ranging from 0 to 3+ (0, no staining; 1+, weak staining; 2+, moderate staining; 3+, intense staining), and the percentage of TLR4/8-positive neoplastic cells in both NSCLC cases and controls. Based on these criteria, positive IHC staining was observed for TLR4 (4/38 cases; 1 LUSC and 3 ADC) and for TLR8, with a more marked intensity for TLR8 (3+) and a higher number of positive cases (6/38; 1 LUSC and 5 ADC). Compared with controls, the representative IHC showed a higher TLR4/8 intensity and a higher percentage of positive cells in NSCLC samples, with a more marked trend for TLR8. The maximum staining intensity for TLR4 was 2+ (Fig. ).
In the present study, we analyzed the expression levels of TLR4 and TLR8 in PBMC and in tissue samples of patients with NSCLC and controls. Consistent with their roles in immune surveillance, TLR4 and TLR8 are expressed in non-malignant and malignant cells, especially in tissues exposed to the external environment, such as the lung and the gastrointestinal tract, where they may influence tumor cell survival and resistance to apoptosis . The airway epithelial cells are the first barrier counteracting the entry of pathogens into the lung via TLR expression . TLRs are differentially expressed in airway epithelial cells on the membrane (TLR5), in the cytoplasm (TLR4, TLR8, and TLR9), or around the nucleus (TLR7) . In particular circumstances, TLR4 and TLR8 can be transferred to the cell surface for ligand recognition . The extracellular TLRs are known to be involved in the recognition of respiratory bacteria, viruses, and host-derived factors, and they initiate innate and adaptive immune responses in airway epithelial cells, which can culminate in the activation of proinflammatory pathways contributing to the pathogenesis of lung diseases. Inflammatory diseases, such as chronic obstructive pulmonary disease (COPD), chronic bronchitis, asthma, pulmonary fibrosis, and acute respiratory distress syndrome (ARDS), may increase the risk of carcinogenesis because of the aberrant immunity occurring in respiratory epithelia, which is mainly regulated by TLRs . To date, conflicting pro- and anti-tumor activities of TLR receptors in lung cancer have been described. In fact, while the activation of TLRs seems to preserve tissue architecture and counteract systemic inflammation, the dysregulated inflammatory response occurring in tumorigenesis can lead to prolonged TLR-mediated activation, contributing to further tissue injury.
Multiple lines of evidence suggest a role for TLR activation in chronic inflammation and in lung cancer, with the common denominator represented by NF-kB and the connected effector pathways recruited following TLR activation. For example, the activation of the downstream MyD88 adaptor protein can serve crucial functions in tumorigenesis and tumor progression . Molecules released from damaged tissues, including pathogen components and tumor-associated antigens released in the tumor milieu, can act as ligands for TLRs and elicit downstream signaling pathways, with the consequent transcription of different genes coding proinflammatory cytokines, which in turn activate subsets of T cells in the lymph node that migrate to the tumor tissue . High levels of TLR2, TLR3, TLR7, and TLR9 have been found in BALF cells of NSCLC patients . In addition, elevated levels of TLR4 detected in patients with NSCLC have been correlated with tumor stage and metastasis, supporting the hypothesis of a critical role of TLRs in the onset and progression of NSCLC . In this study, we enrolled patients with NSCLC with an average age greater than 60 years, together with control individuals. The gene expression analysis shows no significant differences in TLR4 and TLR8 between NSCLC cases and controls. These results disagree with our in silico analysis of 1194 lung tissue samples, in which a significant downregulation of TLR4 (−70% expression in LUAD and −71% in LUSC) and TLR8 (−42% in LUAD and −58% in LUSC) was observed. The discrepancy between the PBMC expression levels of TLR4 and TLR8 and the data obtained from the in silico analysis could be due to the different sample origins, since gene expression was evaluated in PBMC whereas the bioinformatic dataset was derived from tumor tissues.
Interestingly, the GEPIA2 server revealed that low levels of TLR4 positively impact the survival of LUSC patients; these data are in line with the existing literature, which reports a correlation between high TLR4 expression levels and unfavorable prognosis. Dysregulation of apoptotic proteins and resistance to chemotherapy treatments could be associated with these findings . Furthermore, we found that LUAD patients with high levels of TLR8 exhibited improved survival outcomes, with values approaching statistical significance. The positive correlation between the downregulation of TLR4 and survival might suggest a potential pro-tumoral role, while the better survival of patients with elevated TLR8 expression could indicate an immune-response activation by this TLR member. Given the limited number of cases, further investigation is required to confirm these results. It is also important to consider the profound differences between the LUSC and LUAD subtypes in terms of growth, genomic profile, and clinical implications . In a recent study conducted by Smok-Kalwat and colleagues , high levels of TLR4 and TLR8 were detected in the serum of NSCLC subjects in stages III and IV, suggesting an increase in TLR expression in the advanced stages of the pathology. The presence of the soluble forms of TLR proteins in the serum supports the notion that gene and protein expression can be differentially correlated in cancer . Interestingly, we found that TLR4 and TLR8 protein expression levels were higher in NSCLC than in controls, as shown by the scores for IHC staining intensity and percentage of positive cells. TLR8 reached a more marked intensity (3+) than TLR4 (2+) in NSCLC samples. As the abundance of TLR4 and TLR8 in tumor cells did not reflect the mRNA levels detected in PBMC, we also speculate that local TLR4 and TLR8 may participate in the regulation of tumor growth in NSCLC patients.
It is important to acknowledge some limitations of this study, such as the low sample size, especially for the IHC analysis, as well as the lack of a 5-year follow-up; the survival rate at 5 years is strongly reduced at stages III and IV. Further limitations concern the ethnicity of the patients included in this study, who were predominantly of White/Caucasian origin, the discrepancy between the mRNA levels and the protein content, and the currently limited diagnostic utility of the data. Despite this, our data support a possible role for TLR4 and TLR8 in overall survival, which could prove helpful as prognostic biomarkers in early-stage NSCLC. Of course, given the preliminary nature of the experiments, more detailed studies need to be performed to understand the precise role of circulating TLRs and their impact on NSCLC and on survival. During tissue injury and blood cell death, TLRs are released into the serum, acting as potential biomarkers for diagnosis. Therefore, although challenging, the identification of potential biomarkers in the early stages of disease can increase the chances of successful treatment. Furthermore, preclinical and clinical studies highlighting the potential immunomodulatory efficacy of TLR agonists for cancer therapy represent a valuable approach, as standalone molecules, as adjuvants to immunotherapy and vaccines, or in combination with conventional therapies . In conclusion, this study lays the foundations for future studies aimed at clarifying the precise role of TLRs in NSCLC. As the contrasting pro- and anti-tumor effects of TLRs seem to be associated with the countless variables that accompany tumor features and microenvironment, as well as the delivery systems, their identification as potential prognostic/diagnostic biomarkers of NSCLC and the design of specific TLR agonists or antagonists represent a challenging task that could improve the treatment and quality of life of patients.
Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 127 kb)
Hippocampal and Reticulo-Thalamic Parvalbumin Interneurons and Synaptic Re-Organization during Sleep Disorders in the Rat Models of Parkinson’s Disease Neuropathology | d4723afa-456f-4289-8229-c8b4c3bd2b29 | 8396216 | Pathology[mh] | Parkinson’s disease (PD) is the second most common neurodegenerative disease that predominantly affects the motor system as a result of dopaminergic (DA) neuronal loss within the substantia nigra pars compacta (SNpc) . For decades, the clinical diagnosis of PD is dominantly based on identification of motor impairments such as bradykinesia, rigidity, resting tremor, and posture and gait problems . Unfortunately, these motor symptoms appear rather late, following a substantial nigrostriatal dopaminergic neurodegeneration. It has been estimated that at least 50–80% of the nigrostriatal dopaminergic innervation is lost before the onset of clinical parkinsonism . However, over the past few years, the concept of PD pathology has been changed: instead of being regarded as a motor disease induced by the selective loss of DA neurons within the SNpc, PD is now being recognized as a severe multisystem neurodegenerative disorder . It has been suggested that apart from the dysfunction of dopaminergic system underlying the motor impairments, many other brain systems also undergo the pathological alterations and contribute to the non-motor symptoms . Namely, the heterogeneous progression of PD neuropathology , with the clinical symptoms reflecting the localization and progression of underlying neuropathology , has been correlated to the gradual appearance of the non-motor dysfunctions, which are common, preclinical features of PD, linked to the initial (prodromal) stage of disease. Impaired olfaction, gastrointestinal abnormalities, sleep disturbances, visual alterations, and cognitive and mood disorders have been recognized as the non-motor manifestations that precede and accompany the classical motor impairments in PD . 
Sleep disorders are the most frequent prodromal symptom of PD, with a prevalence of 60–70% . Sleep-related symptoms of PD comprise a broad spectrum of conditions, which include insomnia, excessive daytime sleepiness , sleep fragmentation, restless legs syndrome , difficulty in initiating and maintaining sleep, and disturbances of rapid eye movement (REM) sleep, such as REM sleep behavior disorder (RBD) . These sleep disorders progressively worsen in the course of PD and represent a major factor affecting the patient's life . Moreover, they are refractory to, or exacerbated by, standard anti-parkinsonian medications, and they may promote the emergence of motor complications caused by standard pharmacological therapy . Still, only RBD, a parasomnia characterized by the loss of normal muscle atonia during REM sleep and dream enactment behavior, has been shown to be an early predictor of the development of PD . In contrast to the well-defined pathophysiology of the motor impairments, the neuroanatomical and molecular substrates of the non-motor symptoms of PD, which are important in the early stages of the disease, are far from clear. Current evidence suggests PD is a multisystem neurodegenerative syndrome in which neurotransmitters such as acetylcholine, serotonin, noradrenaline, glutamate, and gamma-aminobutyric acid (GABA) play important pathophysiological roles . Moreover, at the cellular level, PD presents as a synaptopathy . In particular, the dysregulation of the GABAergic system has received little attention, although the spectrum of non-motor symptoms might be linked to this pathway. GABA is the main inhibitory neurotransmitter in the central nervous system and a sleep-promoting neurotransmitter that is primarily released by local interneurons to regulate cortical and subcortical microcircuits. GABAergic signaling modulates a wide range of physiological functions, including sleep, sensory perception, information processing, and cognition .
There is evidence that the prodromal non-motor manifestations of PD are undoubtedly related to the downregulation of GABA neurotransmission . Recently, GABAergic dysregulation has been observed in the basal ganglia of patients with PD, along with the co-release of GABA from striatal dopaminergic axons . In addition, it has been shown that GABA agonists relieve motor symptoms and protect dopaminergic cell bodies in mouse models of PD , and that the RBD phenotype developed in transgenic mice when the function of glycine and GABA receptors was impaired . Although the downregulation of two key markers of GABAergic cells (glutamic acid decarboxylase-67 (GAD67) and the calcium-binding protein parvalbumin (PV)) was evidenced in the dorsolateral prefrontal cortex of PD patients without cell loss, the role of GABAergic neurotransmission in the premotor stages of PD has not yet been established and needs to be elucidated in future studies . Our previous research suggests sleep disorders, particularly REM sleep disorders, as possible functional biomarkers of neurodegeneration that are relevant to PD, and as biomarkers of an earlier aging onset in the brain with neurodegeneration vs. the physiological (healthy) brain. Namely, in our rat model of PD cholinopathy (the bilateral lesion of the pedunculopontine tegmental nucleus (PPT); PPT lesion), we previously evidenced topographically differently expressed EEG microstructures within the sensorimotor and motor cortex during non-rapid eye movement (NREM) and REM sleep, alongside the appearance of two REM sleep states, particularly within the motor cortex. The two REM states differed with regard to their EEG microstructures, the electromyographic (EMG) power, and the sensorimotor and motor cortical drives to the dorsal nuchal muscles .
These altered cortical drives were commonly expressed during both REM states as an impaired beta oscillation drive, but the sensorimotor cortical drive was altered more severely during the "healthy" REM (REM with atonia, theta REM) than during the pathological REM sleep (REM without atonia, sigma REM). In addition, the hallmarks of an earlier aging onset during PD cholinopathy were consistently expressed through the EEG sigma amplitude augmentation during REM sleep, as a unique and pathological REM sleep phenomenon, alongside the broadly altered motor cortical drive during NREM and REM sleep. This pathological REM state has been shown to be a REM sleep "enriched" with sleep spindles, a unique phenomenon and a possible biomarker of earlier aging onset in the rat model of PD cholinopathy, and it suggests a disorder at the thalamo-cortical and hippocampal level. Namely, the impaired cholinergic innervation was expressed earlier as a sleep disorder than as a movement disorder; it was earliest and longest-lasting at the hippocampal and thalamo-cortical level, and it was followed by a delayed hypokinesia. Overall, that study suggested that, in the order of their occurrence, the hippocampal NREM sleep disorder, the altered high-voltage sleep spindle dynamics during REM sleep in the hippocampus and motor cortex, and hypokinesia may serve as biomarkers of the onset and progression of PD cholinopathy. In addition, our results in the animal model of PD cholinopathy are in accordance with imaging studies in humans, which demonstrated thalamic cholinergic denervation in parkinsonian disorders with or without dementia and suggested that the neurodegenerative involvement of thalamic cholinergic afferent projections arising from the PPT may contribute to the disease-specific motor and cognitive abnormalities.
Moreover, our previous studies in the rat models of hemiparkinsonism (the unilateral SNpc lesion; SNpc lesion; or the combined unilateral SNpc lesion and bilateral PPT lesion; SNpc/PPT lesion) provided novel evidence for the importance of the SNpc dopaminergic innervation in sleep regulation, theta rhythm generation, and the control of sleep spindle dynamics, along with the importance of the REM sleep regulatory substrate for sleep spindle generation and the cortico-hippocampal synchronization of EEG oscillations. Furthermore, in the rat models of hemiparkinsonism (SNpc lesioned rats and SNpc/PPT lesioned rats), we evidenced impaired spatial memory abilities accompanied by severe hippocampal prodromal sleep disorders, which were expressed as sleep fragmentation and distinct NREM/REM EEG microstructure alterations vs. the motor cortex; the opposite regulatory roles of the dopaminergic vs. cholinergic control of the NREM delta and beta oscillation amplitudes in the hippocampus; and the important role of the REM neurochemical substrate in the dopaminergic control of beta oscillations. In addition, we recently demonstrated the brain structure-related and NREM/REM sleep-related heterogeneity of the simultaneous and non-simultaneous motor cortical and hippocampal local sleep in control rats, suggesting the importance of both the local neuronal network substrate and the NREM/REM neurochemical substrate in the control mechanisms of sleep in physiological as well as in any pathological condition. Based on all the above-mentioned studies, we hypothesize that the distinct sleep disorders arising during distinct PD neuropathologies could be useful biomarkers for the onset and progression follow-up of the neurodegenerative processes in humans, and that the hippocampal GABAergic system plays an important pathophysiological role.
Therefore, in this study we aimed to further investigate the local sleep disorders from the functional (sleep architecture, the EEG microstructure of all sleep states, the episode dynamics of all sleep states and their EEG oscillations, sleep spindles, locomotor activity, and spatial memory abilities) to the cellular (local GABAergic interneurons) and molecular (microtubule-associated protein 2 (MAP2) and postsynaptic density protein 95 (PSD-95)) levels in rats following the bilateral PPT lesion (rats with PD cholinopathy), the unilateral SNpc lesion (hemiparkinsonian rats), and the combined unilateral SNpc/bilateral PPT lesion (hemiparkinsonian rats with PD cholinopathy).

2.1. Alterations of the Hippocampal PV+ Interneurons in the Distinct Rat Models of PD Neuropathology

In this study, in order to investigate the cellular substrate of the local (hippocampal) prodromal sleep disorders in the distinct rat models of PD neuropathology, we quantified the PV+ interneurons of the dentate gyrus (DG) using the hippocampal sections from our previous study, in which we had already verified and quantified the lesions: the bilateral PPT lesion (PD cholinopathy), the unilateral SNpc lesion (hemiparkinsonism), and the combined bilateral PPT/unilateral SNpc lesion (hemiparkinsonism with PD cholinopathy). As previously reported, the PPT cholinergic deficit in the bilaterally PPT lesioned rats (the rat model of PD cholinopathy) was higher than 20% throughout the overall PPT antero-posterior dimension, and the dopaminergic deficit in the SNpc lesioned rats (the rat model of hemiparkinsonism) was higher than 60% throughout the overall SNpc antero-posterior dimension.
Furthermore, in the SNpc/PPT lesioned rats (the rat model of hemiparkinsonism with PD cholinopathy), the PPT cholinergic deficit was the same as in the model of PD cholinopathy, except for a small difference in the middle of this structure, and the SNpc dopaminergic deficit was higher than 48% throughout the overall SNpc antero-posterior dimension, with the highest deficit posteriorly (>72%). In our present study, we quantified the number of PV+ interneurons of the DG throughout the overall hippocampal antero-posterior dimension for each brain side of each rat in each experimental group. We used three defined stereotaxic ranges: −1.50 to −3.00 mm, −3.10 to −4.60 mm, and −4.70 to −6.20 mm posterior to bregma ( and ). While there was no alteration in the number of PV+ interneurons within the DG in the hemiparkinsonian rats ( , SNpc lesion; n = 4; z ≥ −1.01, p ≥ 0.34) versus the controls (Control; n = 6), there was a significantly reduced number of PV+ interneurons in the DG throughout the overall antero-posterior hippocampal dimension in the hemiparkinsonian rats with PD cholinopathy ( , SNpc/PPT lesion; n = 8; z ≥ −2.71, p ≤ 0.02). In contrast, we evidenced an increased number of PV+ interneurons in the DG from −1.50 to −4.60 mm posterior to bregma in the rats with PD cholinopathy ( , PPT lesion; n = 4; z ≥ −2.05, p = 0.04). Typical individual examples of PV immunostaining for each experimental group versus control, throughout the overall DG antero-posterior dimension, are depicted in . Furthermore, we correlated the number of PV+ interneurons in the DG with the SNpc dopaminergic or PPT cholinergic deficits in each rat model of PD.
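The group comparisons above are reported as z statistics over small samples of per-rat cell counts (e.g., n = 6 controls vs. n = 8 SNpc/PPT lesioned rats). Purely as a hedged illustration, a nonparametric rank-sum (Mann–Whitney) comparison with a normal approximation over invented PV+ counts can be sketched as follows; the counts and the choice of test are assumptions for this sketch, not the study's data or its exact statistical procedure:

```python
def mann_whitney_z(a, b):
    """Two-sided Mann-Whitney U test, normal approximation.

    Returns the z statistic for group `a` vs. group `b`
    (negative z: values in `a` tend to be smaller). Ties get
    average ranks; no tie correction of the variance.
    """
    pooled = list(a) + list(b)
    order = sorted(range(len(pooled)), key=lambda i: pooled[i])
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(order):
        j = i
        # group equal values together and give them their average rank
        while j + 1 < len(order) and pooled[order[j + 1]] == pooled[order[i]]:
            j += 1
        mean_rank = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    n1, n2 = len(a), len(b)
    u1 = sum(ranks[:n1]) - n1 * (n1 + 1) / 2.0  # U statistic of group a
    mu = n1 * n2 / 2.0                          # mean of U under H0
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12.0) ** 0.5
    return (u1 - mu) / sigma

# Invented PV+ cell counts per rat within one stereotaxic range:
control = [34, 38, 41, 36, 39, 37]           # n = 6
snpc_ppt = [22, 25, 20, 24, 26, 21, 23, 19]  # n = 8
print(mann_whitney_z(snpc_ppt, control))     # clearly negative z (lower counts)
```

With such small samples an exact permutation p-value, or a library routine such as `scipy.stats.mannwhitneyu`, would normally be preferred over this normal approximation.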
In contrast to the SNpc dopaminergic deficit, which showed no correlation with the number of hippocampal PV+ interneurons in the hemiparkinsonian rats ( B, r = 0.32, p = 0.20 for the SNpc lesion (n = 4); r = −0.10, p = 0.69 for the SNpc/PPT lesion (n = 7)), the PPT cholinergic deficit was positively correlated with the number of hippocampal PV+ interneurons in the rats with PD cholinopathy ( A, PPT lesion (n = 4); r = 0.52, p = 10^−5), as well as in the hemiparkinsonian rats with PD cholinopathy ( A, SNpc/PPT lesion (n = 6); r = 0.29, p = 0.05).

2.2. Impact of the Hippocampal PV+ Interneurons on Local Sleep Architecture and NREM/REM EEG Microstructure in the Distinct Rat Models of PD Neuropathology

In our previous study, we demonstrated hippocampal sleep fragmentation (an increased number of Wake episodes) in the rat models of hemiparkinsonism and hemiparkinsonism with PD cholinopathy. In this study, we did not find any functional coupling (correlation) between the number of PV+ interneurons in the DG and the hippocampal Wake/NREM/REM state durations or their episode number/duration in the control rats (p ≥ 0.32). On the other hand, we demonstrated a functional coupling between the altered number of PV+ interneurons in the DG and the Wake episode duration only in the hemiparkinsonian rats with PD cholinopathy ( , SNpc/PPT lesion (n = 7); r = 0.52, p = 0.01), in contrast to all other experimental groups ( , Control (n = 3); PPT lesion (n = 3); SNpc lesion (n = 3); r ≥ −0.25, p ≥ 0.36). Namely, the more the number of PV+ interneurons was reduced in the DG of the hemiparkinsonian rats with PD cholinopathy (SNpc/PPT lesion), the shorter the duration of the hippocampal Wake episodes.
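The coupling analyses here and throughout the section are reported as Pearson-style r values. As a hedged sketch only (the paired per-rat values below are invented, and this excerpt does not specify the exact correlation pipeline used), the coefficient itself reduces to:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Invented example: per-rat PPT cholinergic deficit (%) vs. PV+ count in the DG.
# A positive r would mirror the direction of the correlation reported above.
deficit = [22, 25, 31, 36, 40, 47]
pv_count = [30, 33, 35, 38, 37, 42]
print(pearson_r(deficit, pv_count))  # strongly positive
```

A library routine such as `scipy.stats.pearsonr` additionally returns the p-value that accompanies r in the text.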
Furthermore, we did not evidence any functional coupling between the number of PV+ interneurons in the DG and the hippocampal NREM/REM delta, theta, sigma, beta, and gamma relative amplitudes, either in the control rats (p ≥ 0.60 for NREM; p ≥ 0.33 for REM) or in any PD model (p ≥ 0.37 for NREM; p ≥ 0.37 for REM).

2.3. Sleep Spindle Dynamics in the Distinct Rat Models of PD Neuropathology

Our present results demonstrate that in the control rats the sleep spindles (SSs) occurred during NREM and REM sleep only within the motor cortex ( and , Control). PD cholinopathy induced the occurrence of SSs during NREM sleep within the hippocampus ( , PPT lesion), as well as the occurrence of high-voltage spindles (HVSs) within both the motor cortex and the hippocampus during NREM and REM sleep ( , PPT lesion). These induced motor cortical and hippocampal HVSs were denser and longer during REM sleep ( A, , PPT lesion; z ≥ −5.99, p ≤ 10^−3 for the motor cortex; z ≥ −3.69, p ≤ 10^−3 for the hippocampus). Conversely, the motor cortical SSs became longer but sparser during REM sleep ( , PPT lesion; z ≥ −2.42, p = 0.02). Hemiparkinsonism induced the occurrence of SSs during hippocampal NREM sleep ( , SNpc lesion), but it induced the occurrence of HVSs only during REM sleep, within both the motor cortex and the hippocampus ( , SNpc lesion). Hemiparkinsonism with PD cholinopathy induced the occurrence of SSs during hippocampal REM sleep ( , SNpc/PPT lesion), of HVSs during both NREM and REM sleep in the motor cortex, and of HVSs only during REM sleep in the hippocampus ( , SNpc/PPT lesion). SSs and HVSs were always slower oscillations within the hippocampus than in the motor cortex ( and ; z ≥ −3.04, p ≤ 0.02), except in hemiparkinsonism with PD cholinopathy ( and ; z = −1.69, p = 0.09).
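The SS/HVS comparisons above rest on a few per-state descriptors, namely event count, density, and mean duration, once spindle-like events have been detected. A minimal sketch with hypothetical onset/offset times (not the study's detection output, whose algorithm is not given in this excerpt):

```python
def spindle_stats(events, state_minutes):
    """Summarize detected spindle-like events within one sleep state.

    events: list of (onset_s, offset_s) pairs, in seconds.
    state_minutes: total time spent in that state, in minutes.
    Returns (count, density per minute of state, mean duration in s).
    """
    durations = [off - on for on, off in events]
    n = len(events)
    density = n / state_minutes if state_minutes else 0.0
    mean_dur = sum(durations) / n if n else 0.0
    return n, density, mean_dur

# Hypothetical HVS events detected during REM sleep (onset, offset in s),
# over 2 minutes spent in REM:
rem_events = [(3.0, 4.5), (41.0, 43.0), (95.5, 96.5)]
n, density, mean_dur = spindle_stats(rem_events, state_minutes=2.0)
print(n, density, mean_dur)  # 3 events, 1.5 per min, mean 1.5 s
```

The density and duration parameters reported per hour in the next paragraph (HVS density/h, HVSdur/h) are the same quantities accumulated over each recording hour.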
In particular, the inter-structure difference in mean HVS frequency during REM sleep was abolished in the hemiparkinsonian rats with PD cholinopathy, due to an increased HVS frequency in the hippocampus ( B, ; z = −3.92, p = 10^−4). Our results provide evidence for REM sleep as a predisposing state for HVS induction in all experimental models of PD, particularly in the hemiparkinsonian models (SNpc lesion and SNpc/PPT lesion). Moreover, PD cholinopathy prolongs the HVS and SS durations and increases the density of the induced HVSs, particularly during REM sleep ( , A). Correlations between the parameters of the HVS dynamics (nHVS, HVS density/h (1/min), HVSdur/h (min), HVSdur (s), and HVSf (Hz)) of the HVSs dominantly induced during hippocampal REM sleep in all experimental models of PD and the number of PV+ interneurons in the DG did not show any functional coupling (p ≥ 0.56), apart from a tendency toward hippocampal HVS duration shortening with the reduced number of PV+ interneurons in the DG of the hemiparkinsonian rats with PD cholinopathy (r = 0.35, p = 0.09).

2.4. Impact of the Hippocampal PV+ Interneurons on the Spatial Memory Abilities (Habitual Response) in the Distinct Rat Models of PD

We previously evidenced a lack of a physiological habitual response (i.e., impaired spatial memory abilities) in both groups of hemiparkinsonian rats, but not in the rats with PD cholinopathy. Here, we correlated the number of PV+ interneurons in the DG with locomotor activity (the distance traveled during the habitual response) in all experimental groups.
In contrast to PD cholinopathy and hemiparkinsonism, as well as the control condition, in which we did not evidence a correlation between the PV+ interneurons and the physiological habitual response ( , Control (n = 6); r = −0.08, p = 0.40; PPT lesion (n = 4); r = 0.05, p = 0.69) or the pathological habitual response (impaired spatial memory abilities; , SNpc lesion (n = 4); r = −0.01, p = 0.96), there was a positive correlation between the number of PV+ interneurons and the habitual response only in the hemiparkinsonian rats with PD cholinopathy ( , SNpc/PPT lesion (n = 8); r = 0.36, p = 10^−5). Since these rats had a reduced mean number of PV+ interneurons (see ) and their spatial memory abilities were impaired (increased locomotor activity over three consecutive days versus the physiological habitual response of decreased locomotor activity over three days; see Petrovic et al.), this positive correlation only indicates a possibly important role of the hippocampal PV+ interneurons in the impaired spatial memory abilities in this experimental group of rats.

2.5. Hippocampal Synaptic Re-Organization and the PV+ Interneurons Alteration in the Distinct Rat Models of PD Neuropathology

To further explore the synaptic re-organization and the alteration of the hippocampal PV+ interneurons, we investigated the hippocampal MAP2 and PSD-95 expression, but only in those experimental groups of rats with opposite effects on PV+ interneuron expression in the DG. Therefore, the hippocampal MAP2 and PSD-95 expression was followed in the control rats, the rats with PD cholinopathy, and the hemiparkinsonian rats with PD cholinopathy.

2.5.1.
Hippocampal PV+ Interneurons Alteration and MAP2 Expression in the Distinct Rat Models of PD Neuropathology

Our results show that PD cholinopathy increased the number of PV+ interneurons within the DG and suppressed MAP2 expression, particularly in the DG granular and polymorphic cell layers, and in the pyramidal cell layer and stratum radiatum of the hippocampal CA3 region ( , and , PPT lesion). Conversely, hemiparkinsonism with PD cholinopathy reduced the number of PV+ interneurons within the DG and enhanced MAP2 expression in the DG granular and polymorphic cell layers, and in the pyramidal cell layer and stratum radiatum of the hippocampal CA3 region ( , and , SNpc/PPT lesion).

2.5.2. Hippocampal PV+ Interneurons Alteration and PSD-95 Expression in the Distinct Rat Models of PD Neuropathology

In addition to the suppressed MAP2 expression and the increased number of PV+ interneurons within the DG, PD cholinopathy also suppressed the overall hippocampal PSD-95 expression. The suppression of PSD-95 immunoreactivity was particularly evident in the granular cell layer and the molecular layer of the DG, as well as in the pyramidal cell layer and stratum radiatum of the hippocampal CA3 region ( , and , PPT lesion). On the other hand, hemiparkinsonism with PD cholinopathy reduced the number of PV+ interneurons in the DG and enhanced PSD-95 expression, but only within the granular cell layer and the molecular layer of the DG ( , and , SNpc/PPT lesion).

2.6. Alteration of the PV+ Interneurons in the Reticulo-Thalamic Nucleus (RT) in the Distinct Rat Models of PD Neuropathology

In order to investigate the cellular substrate of the induced and distinctly altered sleep spindle dynamics in the distinct models of PD neuropathology, particularly the HVS dynamics during hippocampal REM sleep, we also followed the alteration of the PV+ interneurons of the RT, which are important for sleep spindle generation.
We demonstrate the suppression of PV+ interneuron expression in the RT of the hemiparkinsonian rats with PD cholinopathy, as in the hippocampal DG ( , SNpc/PPT lesion). Although we did not quantify the number of PV+ interneurons of the RT, in addition to the suppression of PV+ interneuron expression in almost all the SNpc/PPT lesioned rats (n = 7/8), there were also obvious defects in the PV immunostaining of the dorsal or ventral part of the RT (n = 3/7) on the brain side ipsilateral to the combined SNpc and PPT lesions (always the right brain side) vs. the contralateral brain side, the control rats, the bilaterally PPT lesioned rats, and the SNpc lesioned rats. The overall antero-posterior dimensions of these defects in the PV immunostainings ranged from 0.64 to 1.84 mm. Typical individual examples of RT PV immunostaining for each experimental group of rats are depicted in ( A), with a typical example of the overall antero-posterior dimension of the dorsal and ventral defects within the RT ( Bb, Bd, and Bf) vs. the contralateral RT ( Ba, Bc, and Be). In this typical example of the suppressed PV+ interneuron expression within the RT of a hemiparkinsonian rat with PD cholinopathy, the defect of PV immunostaining spread from −2.07 to −3.91 mm posterior to bregma (an overall antero-posterior PV immunostaining defect of 1.84 mm).

2.7. Synaptic Re-Organization of the RT and the PV+ Interneurons Alteration in the Distinct Rat Models of PD Neuropathology

In parallel to the hippocampal synaptic re-organization, we also investigated the MAP2 and PSD-95 expression within the RT in the control rats, the rats with PD cholinopathy, and the hemiparkinsonian rats with PD cholinopathy.

2.7.1.
PV+ Interneurons Alteration of the RT and MAP2 Expression in the Distinct Rat Models of PD Neuropathology

We evidenced that hemiparkinsonism with PD cholinopathy reduced PV+ interneuron expression within the RT and potentiated MAP2 expression ipsilateral to the combined SNpc and PPT lesions (the right brain side), as in the hippocampus, versus the RT of the contralateral brain side (the left side, with only the PPT lesion), the control rats, and the bilaterally PPT lesioned rats ( , SNpc/PPT lesion). In addition, MAP2 expression was partially enhanced in the dorsal part of the RT in the rats with PD cholinopathy ( , PPT lesion). Typical examples of MAP2 expression within the RT of the experimental groups of rats with opposite effects on PV+ interneuron expression in the hippocampal DG and RT (PPT lesion and SNpc/PPT lesion), and versus the controls (Control), are depicted in A.

2.7.2. PV+ Interneurons Alteration of the RT and PSD-95 Expression in the Distinct Rat Models of PD Neuropathology

Although there was no alteration in the number of PV+ interneurons and MAP2 expression was partially increased (see ), there was a suppression of PSD-95 expression in the RT of the rats with PD cholinopathy ( , PPT lesion). On the other hand, there was reduced PV+ interneuron expression within the RT of the hemiparkinsonian rats with PD cholinopathy, along with MAP2 potentiation, while PSD-95 expression was similar to the control level ( , SNpc/PPT lesion).
Typical examples of PSD-95 expression within RT of the experimental groups of rats with the opposite effect on PV+ interneurons expression in the hippocampal DG and RT (PPT lesion and SNpc/PPT lesion) and versus controls (Control) are depicted in B.
Our present study demonstrates the underlying alterations of hippocampal and RT GABAergic PV+ interneuron expression and their distinct synaptic re-organizations during the hippocampal (local) prodromal sleep disorders in the distinct rat models of PD neuropathology. In particular, whereas PD cholinopathy enhanced the number of hippocampal PV+ interneurons in DG, hemiparkinsonism with PD cholinopathy reduced it ( and ). This opposite alteration of hippocampal GABAergic PV+ interneuron expression in DG was induced by a bilateral cholinergic deficit higher than 20% throughout the overall PPT antero-posterior dimension (PD cholinopathy), or by a unilateral SNpc dopaminergic deficit higher than 48% along with a bilateral PPT cholinergic deficit higher than 20% throughout the overall PPT/SNpc antero-posterior dimensions (hemiparkinsonism with PD cholinopathy). Moreover, while there was no correlation of the SNpc dopaminergic deficits with GABAergic PV+ interneuron expression in the hemiparkinsonian rats, the PPT cholinergic deficit was significantly and positively correlated with the number of GABAergic PV+ interneurons in the hippocampal DG of the rats with PD cholinopathy, as well as of the hemiparkinsonian rats with PD cholinopathy . Namely, the higher the PPT cholinergic deficit, the higher the number of GABAergic PV+ interneurons ( A). However, we did not find any functional coupling between the number of hippocampal PV+ interneurons in DG and the hippocampal Wake/NREM/REM states' duration, their episode number/duration, or the NREM/REM delta, theta, sigma, beta, and gamma EEG relative amplitudes, either in the control rats or in any model of PD neuropathology. We only evidenced a functional coupling between the altered number of hippocampal PV+ interneurons in DG and the duration of the hippocampal Wake episodes in the hemiparkinsonian rats with PD cholinopathy .
Namely, in the hemiparkinsonian rats with PD cholinopathy, the more the number of hippocampal PV+ interneurons in DG was reduced, the shorter the duration of the hippocampal Wake episodes. In addition, our study demonstrates for the first time that REM sleep is a predisposing state for HVS generation in all experimental models of PD neuropathology, particularly during hippocampal REM sleep in the hemiparkinsonian models. Moreover, PD cholinopathy prolongs the duration of both the HVSs and SSs and increases the density of the induced HVSs, particularly during REM sleep ( and , ). We did not find any functional coupling between the parameters of HVS dynamics, which were dominantly induced during hippocampal REM sleep in all experimental models of PD, and the number of hippocampal PV+ interneurons. However, the inter-structure differences in the mean HVS frequency during REM sleep (both types of sleep spindles are always slower oscillations within the hippocampus vs. the motor cortex) were abolished in the hemiparkinsonian rats with PD cholinopathy, owing to an increased HVS frequency in the hippocampus ( , B). Although we previously demonstrated impaired spatial memory abilities in both hemiparkinsonian rat models versus the controls and the rats with PD cholinopathy , our present study demonstrates a significant positive correlation between the number of hippocampal PV+ interneurons in the DG (a reduced number of PV+ interneurons) and a pathological habitual response (increased locomotor activity over three consecutive days) only in the hemiparkinsonian rats with PD cholinopathy .
However, since we used the habitual response as an indirect measure of memory abilities (not a proper memory test) and we quantified the PV+ interneurons on .tiff images and only within the overall dimension of DG, this positive correlation may only indicate a possible important role of the hippocampal GABAergic PV+ interneurons in the impaired spatial memory abilities in this experimental group of rats , which needs further investigation. Furthermore, our results demonstrate the opposite alteration of hippocampal GABAergic PV+ interneuron expression in DG in PD cholinopathy vs. hemiparkinsonism with PD cholinopathy, along with distinct local (hippocampal) and remote (RT) MAP2 and PSD-95 expression. While PD cholinopathy enhanced hippocampal PV+ interneuron expression in DG and suppressed the hippocampal MAP2 and PSD-95 expression, hemiparkinsonism with PD cholinopathy reduced hippocampal PV+ interneuron expression in DG and induced an overexpression of the hippocampal MAP2 and PSD-95 ( , , and , ). In addition to the locally enhanced PV+ interneuron expression and MAP2/PSD-95 suppression in the hippocampus, during PD cholinopathy there was no alteration of PV+ interneuron expression within RT, and there was a partial remote enhancement of MAP2 expression, only in the dorsal part of RT, along with PSD-95 suppression ( and ). Conversely, in the hemiparkinsonian rats with PD cholinopathy there was a reduced hippocampal/RT number of PV+ interneurons along with enhanced hippocampal/RT MAP2 expression, and PSD-95 expression was enhanced only in the hippocampus ( and , and ). In our present study, we evidenced opposite levels of expression of parvalbumin, the dendritic protein MAP2, and the postsynaptic excitatory protein PSD-95 locally in the hippocampus, which could be the underlying mechanisms of the distinct hippocampal prodromal sleep disorders in the PD cholinopathy vs.
the hemiparkinsonism with PD cholinopathy. On the other hand, the suppression of excitation (detected by the lack of expression of the excitatory synaptic protein PSD-95) in the RT of the rats with PD cholinopathy vs. the hemiparkinsonian rats with PD cholinopathy indicates an important regulatory role of the PPT cholinergic afferent system and the parvalbumin GABA neurons in RT. The lack of excitation in RT (no change in PV+ interneuron expression along with suppressed PSD-95 expression in RT) of the rats with PD cholinopathy vs. the hemiparkinsonian rats with PD cholinopathy could be a reason for the prolongation of both the HVS and SS duration and the increased density of the induced HVSs, particularly during REM sleep ( and , ). According to the new GABA collapse hypothesis , PD is a multisystemic neurodegenerative disease whose clinical symptoms reflect the localization and progression of the most advanced GABA pathology. In addition, the hippocampal GABA PV-expressing interneurons coordinate the hippocampal network dynamics required for memory consolidation . PV+ interneurons are a major type of GABAergic inhibitory interneuron in the brain, characterized by their short action potential duration and their ability to fire at high frequencies . They have multiple dendrites receiving inputs from diverse afferent pathways, as well as numerous perisomatic boutons onto excitatory neurons, together resulting in an integrated feedforward and feedback inhibitory control of both the local circuitry and remote neuronal networks . They play a crucial role in determining oscillatory network activity and in regulating plasticity following behavioral learning , and they are of crucial importance in spatial memory consolidation in the hippocampus .
A recent study suggests that, immediately following learning, the hippocampal PV+ interneurons drive local oscillations and the reactivation of local neuronal populations, which directly promotes network plasticity and long-term memory formation . PV+ interneuron dysfunction has been linked to several brain diseases that involve memory deficits . Our present results are in accordance with evidence that the prodromal non-motor symptoms of PD are related to the GABAergic system, and that distinct sleep disorders, as prodromal symptoms of PD, may further accelerate the neurodegenerative processes . Although the sleep disorders in PD have a multifactorial etiology, the pathological degeneration of the neuronal populations representing the main sleep regulation centers in the brainstem (such as the PPT) and thalamo-cortical pathways (such as the RT) is probably the most relevant factor. It should be noted here that the GABAergic system is involved in every aspect of sleep regulation, and that intraneuronal ion equilibrium, including the optimal calcium level, can be fully recovered during sleep . Furthermore, it is well known that all brain structures, including those related to sleep regulation, are anatomically and neurochemically heterogeneous neuronal populations sharing a common mechanism that controls their activity and metabolism through a complex interaction between GABA and Ca 2+ -dependent neurotransmission and Ca 2+ -dependent neuronal metabolism . The Ca 2+ /GABA mechanism stabilizes neuronal activity at the cellular and systemic levels . GABA is the main inhibitory neurotransmitter within the central nervous system . Synaptic transmission, signal transmission, adaptive adjustments, and memory are Ca 2+ /GABA-related mechanisms . The GABA system protects the neurons by controlling calcium influx, directly via GABA receptors or indirectly via astrocytes and glial networks .
There is evidence that excessive neuronal activity is first tuned by increased GABA inhibition and then further, if necessary, by the reduction of GABA synaptic receptors and calcium channels . Therefore, the GABA interneurons are the homeostatic regulators of synaptic inhibition within cellular networks, and the GABA decline etiology appears to apply to all human neurodegenerative processes initiated by abnormal intracellular calcium levels. A reduction of PV-expressing hippocampal GABA interneurons was reported in several mouse models of autism in which the excitation/inhibition balance was shifted toward enhanced inhibition, without GABA neuronal loss but with PV downregulation . In addition, in a rat model of depression, PV expression in GABAergic interneurons was reduced in all regions of the hippocampus . Moreover, there is evidence that reduced PV expression (low levels of the Ca 2+ binding protein parvalbumin) may increase vulnerability to excitotoxicity via a reduced calcium-buffering capacity , whereas PV-overexpressing neurons were particularly resistant to excitotoxicity and cell death . Our results show that PD cholinopathy induced an overexpression of PV+ interneurons in the hippocampal DG but did not alter PV+ interneuron expression remotely in RT. Conversely, hemiparkinsonism with PD cholinopathy reduced PV+ interneuron expression in the hippocampal DG and RT, along with enhanced MAP2 expression in both brain structures and PSD-95 expression that was enhanced in the hippocampal DG but unchanged in RT, suggesting severe presynaptic and postsynaptic re-organizations at the hippocampal and thalamic levels.
Therefore, at the level of the local field potential, such as the sleep spindle, the increased mean intrinsic frequency of the hippocampal HVS during REM sleep in the hemiparkinsonian rats with PD cholinopathy ( B) could be a consequence of the PV+ interneuron reduction and/or the presynaptic and postsynaptic re-organization in the RT. Moreover, our present study suggests a possible protective role of the hippocampal PV overexpression against the synaptic re-organization in the local hippocampal network, as well as remotely in the RT, during PD cholinopathy; conversely, the reduced hippocampal/RT PV expression during hemiparkinsonism with PD cholinopathy was accompanied by hippocampal MAP2/PSD-95 overexpression, along with MAP2 overexpression and unchanged PSD-95 expression remotely in the RT. Our results imply an important regulatory role of the PPT cholinergic and the SNpc dopaminergic afferent systems on the hippocampal and RT synaptic re-organizations through GABAergic PV+ interneurons. It should be noted here that RT is a thin sheet of chemically and electrically interconnected GABAergic neurons , which surrounds the other thalamic nuclei and has a key role in sleep rhythm generation, particularly in sleep spindle generation, but also in delta and slow oscillations . In addition to the anatomical, morphological, and neurochemical heterogeneity of RT and its crucial implication in sleep rhythms, the RT was recently implicated in the regulation of local sleep heterogeneity through parallel thalamo-cortical loops . Namely, the RT is strongly innervated by cortical inputs and is a part of reciprocally connected and focalized thalamo-cortical loops. Therefore, the cortical activity could drive RT, which would in return influence the cortex in a heterogeneous, local manner .
The origin of the cortical and subcortical afferents and the thalamic target define the anatomical subregions of RT: while the postero-dorsal part of RT is involved in visual and somatosensory modalities, the anterior part of RT is involved in motor and limbic structures. RT is topographically segregated into different parts with different cellular properties that tune the type of local sleep patterns and local sleep oscillations through distinct thalamo-cortical loops. Therefore, the sleep rhythm or sleep oscillation abnormalities in distinct diseases could be related to altered activity in local parts of RT. For example, a strong deficit in sleep spindles possibly arises due to impaired RT activity . Our present results suggest that the distinctly altered GABAergic PV+ interneurons, along with a synaptic re-organization in the RT local network, could be the underlying mechanisms of HVS generation, particularly during REM sleep, as well as of the distinct HVS dynamics in the distinct rat models of PD neuropathology. Our results demonstrate the reduced PV+ interneuron expression, enhanced MAP2 expression, and unchanged PSD-95 expression throughout all of the topographically determined RT functional subregions. In addition, there is evidence that the alteration of inhibitory transmission in the hippocampus, in particular by the PV+ interneurons, is linked to spatial memory deficits, and that early treatment of PV interneuron hyperactivity might be clinically relevant in preventing memory decline and local network hyperexcitability, and in delaying the progression of neurodegenerative diseases such as Alzheimer's disease . Furthermore, a recent study using in vivo Ca 2+ imaging and optogenetics evidenced that the activity of DG adult-born neurons during REM sleep is necessary for memory consolidation .
Our present results, related to the reduced PV+ interneuron expression alongside MAP2 overexpression and PSD-95 overexpression (hippocampus) or no change (RT) in the hemiparkinsonian rats with PD cholinopathy, are in accordance with recent evidence that the dysfunction of GABAergic inhibition and a consequent imbalance between excitation and inhibition result in hyperexcitability and the desynchronization of neuronal networks, leading to impairment of information processing, learning, and memory formation . It should be noted here that much evidence has implicated calcium-related homeostatic mechanisms, giving rise to the Ca 2+ hypothesis of brain aging and cell death . Although oxidative stress and calcium-induced excitotoxicity are considered important pathophysiological mechanisms leading to neural cell death in PD, the factors that make certain neurons vulnerable to neurodegeneration remain unknown . Our study demonstrates for the first time an important regulatory role of the hippocampal and RT GABAergic PV+ interneurons and the synaptic protein dynamic alterations in distinct rat models of PD neuropathology, which are reflected prodromally, distinctly, and in a long-lasting manner at the functional level: from distinct local sleep disorders, through the distinct alteration of sleep-related EEG oscillations, to the distinct alteration of sleep spindle dynamics. Our results in the rat models of PD neuropathology indicate that augmenting GABAergic signaling via PV+ interneuron modulation can be effective in improving or ameliorating the prodromal sleep disorders and memory deficits in PD.

4.1. Experimental Design

We used 31 adult male Wistar rats (each two and a half months old, weighing between 250 and 290 g), which were chronically implanted for sleep recording.
The rats were randomly divided into four experimental groups: control rats (implanted controls, n = 8), rats with PD cholinopathy (a bilateral PPT lesion group, n = 8), hemiparkinsonian rats (a unilateral SNpc lesion group, n = 7), and hemiparkinsonian rats with PD cholinopathy (a unilateral SNpc/bilateral PPT lesion group, n = 8). After the surgery and throughout the experimental protocol, the animals were individually housed in custom-made clear plexiglass cages (30 × 30 × 30 cm) on a 12 h light–dark cycle (7 a.m. lights on, 7 p.m. lights off) at 25 °C with food and water ad libitum. All the procedures were performed in accordance with EEC Directive (2010/63/EU) on the Protection of Animals Used for Experimental and other Scientific Purposes, and the protocol was approved by the Ethics Committee for the Protection of Welfare of Experimental Animals of the Institute for Biological Research “Siniša Stanković”—National Institute of Republic of Serbia, University of Belgrade (Approval No. 01–1490; 28/09/2020), and by the Veterinary Directorate, Department of Animal Welfare, Ministry of Agriculture, Forestry and Water Management of Republic of Serbia (Approval No. 323-07-10509/2020-05/1; 13/10/2020).

4.2. Surgical Procedure

The surgical procedures for the chronic electrode implantation for sleep recording have been conducted as previously described . In brief, the rats were anesthetized with ketamine/diazepam anesthesia (50 mg/kg, Zoletil ® 50, VIRBAC, Carros, France; intraperitoneal injection) and positioned in a stereotaxic frame (Stoelting Co., Dublin, Ireland).
We implanted two epidural stainless steel screw electrodes in the motor cortex (MCx; A/P: + 1.0 mm from bregma; R/L: 2.0 mm from the sagittal suture; D/V: 1.0 mm from the skull, in accordance with Paxinos and Watson ) and two wire electrodes (stainless-steel teflon-coated wire, Medwire, Mount Vernon, NY, USA) into the CA1 hippocampal regions (Hipp; A/P: −3.6 mm from the bregma; R/L: 2.5 mm from the sagittal suture; D/V: 2.5 mm from the brain surface, in accordance with Paxinos and Watson ). To assess skeletal muscle activity (EMG), the bilateral wire electrodes were implanted into the dorsal nuchal musculature, and a stainless-steel screw electrode was implanted in the nasal bone as a ground. All the electrode leads were soldered to a miniature connector plug (39F1401, Newark Electronics, Schaumburg, IL, USA), and the assembly was fixed to the screw electrodes and skull using acrylic dental cement (Biocryl-RN, Galenika a.d. Beograd, Serbia). All the lesions were performed by stereotaxically guided microinfusions during the same surgical procedure for the implantation of the EEG and EMG electrodes, using a Digital Lab Standard Stereotaxic Instrument (Stoelting Co., Dublin, Ireland) with a Quintessential Stereotaxic Injector (Stoelting Co., Wood Dale, IL, USA) and a Hamilton syringe (10 µL or 1 µL). PD cholinopathy in Wistar rats was induced by the bilateral PPT lesion, using ibotenic acid (IBO, Sigma-Aldrich, St. Louis, MO, USA). We infused 100 nL of 0.1 M IBO/0.1 M PBS bilaterally into the PPT (A/P: −7.8 mm from the bregma; R/L: 1.9 mm from the sagittal suture; D/V: 7.0 mm from the brain surface, following Paxinos and Watson ), as a continuous infusion over 60 s . The hemiparkinsonism was induced by the unilateral SNpc lesion, using the 6-hydroxy dopamine hydrobromide salt (6-OHDA, Sigma-Aldrich, St. Louis, MO, USA). 
We infused 1 µL of 6 µg/µL 6-OHDA, dissolved in ice cold sterile saline (0.9% NaCl), and supplemented with 0.2% ascorbic acid, which served as an anti-oxidant, into the right SNpc (A/P: −5.3 mm from the bregma; R: 2.4 mm from the sagittal suture; D/V: 7.4 mm from the brain surface, following Paxinos and Watson ). The 6-OHDA microinfusions were performed as a continuous infusion of 200 nL/min, at a constant flow rate, over 5 min . In order to minimize the uptake of 6-OHDA by the noradrenergic neurons, 30 min prior to the microinfusion, each rat received a bolus of desipramine hydrochloride (28.42 mg/kg, i.p., Sigma-Aldrich, Taufkirchen, Germany; pH = 7.4). To induce hemiparkinsonism with PD cholinopathy, we performed double lesioning, in this case both a unilateral SNpc lesion and a bilateral PPT lesion . After each microinfusion, the needle remained within the local brain tissue for 5 min, allowing the solution to diffuse within the PPT or SNpc. For the bilateral PPT lesions, the Hamilton syringe needle was always washed out following the first IBO microinfusion, before the microinfusion into the contralateral PPT. At the end of the surgical procedure, the scalp wounds were sutured, and the rats were allowed to recover for two weeks.

4.3. Recording Procedure

All the sleep recording sessions were performed 14 days after the surgical procedure. Sleep was recorded for 6 h during the light phase, starting at 9 a.m. The EEG and EMG activities were differentially recorded. After conventional amplification and filtering (0.3–100 Hz band pass; A-M System Inc., Model 3600, Carlborg, WA, USA), the analogue data were digitized (at a sampling frequency of 256/s) using DataWave SciWorks Experimenter Version 8.0 (DataWave Technologies, Longmont, CO, USA), and the EEG and EMG activities were displayed on a computer monitor and stored on a disk for further off-line analysis .

4.4. Behavioral Assessments

Behavioral assessments were done a week following sleep recordings, during the light phase (starting at 9 a.m.), as previously described . Before each test, the animals were allowed to habituate to the experimental room for 30 min. The basal locomotor activity was monitored for 30 min using an Opto-Varimex Auto-Track System (Columbus Instruments, Columbus, OH, USA) and expressed as distance (centimeters) traveled in the open field arena. The spatial habituation test was performed over three consecutive sessions (locomotor activity during 30 min in the open arena) separated by 24 h intervals and served as an indirect measure of the spatial memory abilities.

4.5. Tissue Processing for Histology

At the end of all the recordings and behavioral assessments, the rats were sacrificed for histology. All animals were deeply anesthetized with ketamine/diazepam and perfused transcardially, with 0.9% saline, followed by a 4% paraformaldehyde (PFA, Sigma Aldrich, Taufkirchen, Germany) in 0.1 M phosphate-buffered saline (PBS, pH = 7.4) and finally with a 10% sucrose solution in 0.1 M PBS. The brains were removed and immersed in 4% PFA overnight, and then in a 30% sucrose solution for several days. The brains were serially sectioned on the cryostat (Leica, Wetzlar, Germany) into coronal 40 μm-thick sections, and the free-floating sections were stored in a cryoprotective buffer for further use .

4.5.1. Lesion Identification and Quantification

The PPT lesion was identified using NADPH–diaphorase histochemistry and quantified based on the number of NADPH–diaphorase positively stained cells within the PPT . As previously described , the free-floating sections were rinsed in 0.1 M PBS (pH = 7.4) and incubated for 1 h at 37 °C in the staining solution containing β-NADPH reduced tetrasodium salt (Serva, Heidelberg, Germany) and dimethyl sulfoxide (DMSO, Sigma-Aldrich, Taufkirchen, Germany) dissolved in substrate solution.
The substrate solution contained nitro blue tetrazolium chloride (NBT, Serva, Heidelberg, Germany) and 5-bromo-4-chloro-3-indolyl phosphate (BCIP, Serva, Heidelberg, Germany) dissolved in the substrate buffer at pH = 9.5 (0.1 M Tris, 100 mM NaCl, 5 mM MgCl2). The background staining induced by the endogenous alkaline phosphate was reduced by 2 mM levamisole (Sigma-Aldrich, Taufkirchen, Germany). Finally, all the sections were mounted on slides, placed in a clearing agent (Xylene, Zorka Pharma, RS), coverslipped using DPX (Sigma-Aldrich, Burlington, MA, USA), and examined under a Zeiss Axiovert microscope with a camera (Zeiss, Jena, Germany). The SNpc lesion was identified by tyrosine hydroxylase (TH) immunohistochemistry and quantified based on the number of TH immunostained cells within the SNpc. The brain sections were initially thoroughly rinsed with 0.1 M PBS. The endogenous peroxidase activity was neutralized using 3% hydrogen peroxide/10% methanol/0.1 M PBS for 15 min, and non-specific binding was prevented by 60 min of incubation in 5% normal donkey serum (D9663, Sigma-Aldrich, Burlington, MA, USA)/0.1 M PBS at room temperature . The sections were further incubated for 48 h at +4 °C with a primary mouse monoclonal anti-TH antibody (dil. 1:16,000, T2928, Sigma-Aldrich, Burlington, MA, USA) in a blocking solution with 0.5% Triton X-100 (Sigma-Aldrich, Burlington, MA, USA), and subsequently for 90 min in polyclonal rabbit anti-mouse immunoglobulin (dil. 1:100, P0260, Agilent Dako, Glostrup, Denmark). Between each immunolabeling step, the sections were washed in fresh 0.1 M PBS (3 × 5 min). The immunoreactive signals were visualized using a diaminobenzidine solution (1% 3,3′–diaminobenzidine (11208, Acros organics, Geel, Belgium)/0.3% hydrogen peroxide/0.1 M PBS). 
All the sections were finally mounted on slides, dehydrated in a series of increasing ethanol solutions (ethanol 70%, 96%, 100%, Zorka Pharma, Sabac, RS), placed in a clearing agent (Xylene, Zorka Pharma, Sabac, RS), coverslipped with DPX (Sigma-Aldrich, Burlington, MA, USA), and examined under a Leica light microscope with a camera (Leica, Wetzlar, Germany). To test the specificity of the immunolabeling, the primary antibodies were omitted in the control experiments. The quantification of cholinergic and/or dopaminergic neuronal loss was done by counting the NADPH–diaphorase or TH positively stained cells using ImageJ 1.46 software . For this purpose, all the tissue samples of the corresponding experimental group and brain structure (three sections per rat and brain structure) were grouped into three stereotaxic ranges defined according to the overall PPT or SNpc antero-posterior dimension (for the SNpc lesion: −4.60–5.10, −5.20–5.70, and −5.80–6.30 mm posterior from the bregma; for the PPT lesion: −6.90–7.40, −7.50–8.00, and −8.10–8.60 mm posterior from the bregma). The neuronal losses were expressed with respect to the mean control absolute number for each stereotaxic range of the PPT/SNpc, which was taken as 100%. The unilateral SNpc lesions were quantified with respect to the corresponding contralateral SNpc, whereas the bilateral PPT lesions were quantified with respect to controls .

4.5.2. Immunohistochemistry for PV, MAP2 and PSD-95

The free-floating brain sections were initially thoroughly rinsed with 0.1 M PBS (pH = 7.4). Non-specific binding was prevented by incubation in 3% hydrogen peroxide/10% methanol/0.1 M PBS for 15 min and 5% normal donkey serum/0.1 M PBS (D9663, Sigma-Aldrich, Burlington, MA, USA) for 60 min at room temperature. Then, the sections were incubated overnight at +4 °C with the following primary antibodies: mouse monoclonal anti-PV antibody (dil.
1:2000, P3088, Sigma-Aldrich, Burlington, MA, USA) , mouse monoclonal anti-MAP2 antibody (dil. 1:6000, MAB378, Merck Millipore, Burlington, MA, USA), and mouse monoclonal anti-PSD-95 antibody (dil. 1:200, MAB1598, Merck Millipore, Burlington, MA, USA). The primary antibodies were diluted in PBS containing 0.5% Triton X-100 (for anti-PV) or 0.1% Triton X-100 (for anti-MAP2 and anti-PSD-95). After three 5-min washes in 0.1 M PBS, the sections were incubated for 90 min with polyclonal rabbit anti-mouse immunoglobulin (dil. 1:100, P0260, Agilent Dako, Glostrup, Denmark). The immunoreactive signals were visualized using a diaminobenzidine solution (1% 3,3′–diaminobenzidine [11208, Acros organics, Geel, Belgium]/0.3% hydrogen peroxide/0.1 M PBS). All the sections were finally mounted on slides, dehydrated through increasing alcohol concentrations (70%, 96%, and 100% ethanol, Zorka Pharma, Sabac, RS), placed in a clearing agent (Xylene, Zorka Pharma, Sabac, RS), coverslipped with DPX (Sigma-Aldrich, Burlington, MA, USA), and examined under a Leica light microscope with a camera (Leica, Wetzlar, Germany). To test the specificity of immunostaining, the primary antibodies were omitted in the control experiments.

4.5.3. Quantification of PV Immunostaining

The quantification of PV immunoreactivity within the dentate gyrus (DG) was done using ImageJ 1.46 software (NIH, Bethesda, MD, USA) by counting the number of PV immunoreactive (PV+) interneurons. For this purpose, all the tissue samples of the corresponding experimental groups were grouped into three stereotaxic ranges covering the overall hippocampal antero-posterior dimension. The defined stereotaxic ranges were −1.50–3.00 mm, −3.10–4.60 mm, and −4.70–6.20 mm posterior from bregma. For all experimental groups, the number of PV+ interneurons was counted per brain side at each stereotaxic range, pooled for each experimental group, and expressed as the mean number + SE.

4.6. Sleep Analysis

The sleep analysis was done in MATLAB R2011a (MathWorks Inc., Natick, MA, USA) using software originally developed in MATLAB 6.5 . We applied the FFT algorithm to the signals acquired throughout each 6-hour recording (2160 10-s Fourier epochs in total) and automatically differentiated each 10-s epoch as a Wake, NREM, or REM state . To assess local sleep, we extracted the simultaneous and non-simultaneous Wake/NREM/REM 10-s epochs of the motor cortex and the hippocampus for further analysis of the local sleep architecture (Wake/NREM/REM state duration), the local state-related episode dynamics (Wake/NREM/REM episode number and episode duration), and the local state-related EEG microstructure (Wake/NREM/REM relative amplitudes of all the conventional EEG frequency bands) . In addition, we also analyzed the sleep spindle (SS) and high-voltage sleep spindle (HVS) dynamics during 1 h of NREM and REM sleep (always extracted between the 3rd and 4th hour of sleep recording), simultaneously recorded in the motor cortex and hippocampus. The automatic detection of SSs and HVSs was followed by visual validation of all the detected SSs and HVSs before the final extraction and analysis . Namely, after the EEG signals were band-pass filtered (11–17 Hz for SSs and 4.1–10 Hz for HVSs), we applied the continuous wavelet transform with the complex Morlet mother wavelet (“cmor1-2”) with a determined central frequency f 0 = 2 . Additional detection criteria included a minimum duration set to 0.5 s for SSs and 1 s for HVSs. However, the automatic detection had to be visually corrected, since some detections were false positive, false negative, or inaccurate (the oscillation was not detected over its full duration).
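The detection step described above (band-pass filtering, a time-frequency amplitude criterion, and a minimum-duration rule) can be sketched as follows. This is a simplified illustration, not the study's pipeline: it substitutes a Hilbert envelope with a median-based threshold for the complex Morlet wavelet transform, runs on a synthetic 256 Hz trace, and the threshold multiplier and function names are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 256  # Hz, the study's sampling rate

def detect_spindles(eeg, fs=FS, band=(11, 17), min_dur=0.5, thresh_mult=4.0):
    """Return (start_s, end_s) pairs where the band-limited envelope stays
    above a median-based threshold for at least `min_dur` seconds.
    (Simplified stand-in for the wavelet-based detection in the study.)"""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, eeg)           # zero-phase band-pass
    env = np.abs(hilbert(filtered))          # analytic-signal envelope
    above = env > thresh_mult * np.median(env)
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i                        # supra-threshold run begins
        elif not flag and start is not None:
            if (i - start) / fs >= min_dur:  # keep only long-enough runs
                events.append((start / fs, i / fs))
            start = None
    if start is not None and (len(above) - start) / fs >= min_dur:
        events.append((start / fs, len(above) / fs))
    return events

# Synthetic 5 s trace: background noise plus a 13 Hz "spindle" at 2.0-3.0 s
rng = np.random.default_rng(0)
t = np.arange(0, 5, 1 / FS)
eeg = 0.1 * rng.standard_normal(t.size)
burst = (t >= 2.0) & (t < 3.0)
eeg[burst] += np.sin(2 * np.pi * 13 * t[burst])

print(detect_spindles(eeg))  # one event spanning roughly 2.0-3.0 s
```

The same routine would be run with `band=(4.1, 10)` and `min_dur=1.0` for HVSs; as the section notes, any automatic detection of this kind still requires visual validation before final analysis.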
For the final analysis of spindle dynamics (mean density, mean intrinsic frequency, and mean duration per 1 h of NREM and REM sleep), all the visually detected SSs or HVSs were extracted and concatenated for each structure (motor cortex or hippocampus), each state (NREM or REM sleep), and each experimental group (Control, PPT lesion, SNpc lesion, and SNpc/PPT lesion). 4.7. Statistical Analysis All statistical analyses were performed using a Kruskal–Wallis ANOVA (χ² values) with the Mann–Whitney U (z-values) two-tailed post hoc test. The accepted level of significance in all cases was p ≤ 0.05. For the correlation analysis, we employed Pearson’s correlation coefficient, with the accepted level of significance of p ≤ 0.05. We used 31 adult male Wistar rats (two and a half months old, weighing 250–290 g), which were chronically implanted for sleep recording. The rats were randomly divided into four experimental groups: control rats (implanted controls, n = 8), rats with PD cholinopathy (a bilateral PPT lesion group, n = 8), hemiparkinsonian rats (a unilateral SNpc lesion group, n = 7), and hemiparkinsonian rats with PD cholinopathy (a unilateral SNpc/bilateral PPT lesion group, n = 8). After the surgery and throughout the experimental protocol, the animals were individually housed in custom-made clear plexiglass cages (30 × 30 × 30 cm) on a 12-h light–dark cycle (7 a.m. lights on, 7 p.m. lights off) at 25 °C with food and water ad libitum. All the procedures were performed in accordance with EU Directive 2010/63/EU on the protection of animals used for experimental and other scientific purposes, and the protocol was approved by the Ethics Committee for the Protection of Welfare of Experimental Animals of the Institute for Biological Research “Siniša Stanković”—National Institute of Republic of Serbia, University of Belgrade (Approval No.
01–1490; 28/09/2020), and by the Veterinary Directorate, Department of Animal Welfare, Ministry of Agriculture, Forestry and Water Management of Republic of Serbia (Approval No. 323-07-10509/2020-05/1; 13/10/2020). The surgical procedures for the chronic electrode implantation for sleep recording have been conducted as previously described . In brief, the rats were anesthetized with ketamine/diazepam anesthesia (50 mg/kg, Zoletil ® 50, VIRBAC, Carros, France; intraperitoneal injection) and positioned in a stereotaxic frame (Stoelting Co., Dublin, Ireland). We implanted two epidural stainless steel screw electrodes in the motor cortex (MCx; A/P: + 1.0 mm from bregma; R/L: 2.0 mm from the sagittal suture; D/V: 1.0 mm from the skull, in accordance with Paxinos and Watson ) and two wire electrodes (stainless-steel teflon-coated wire, Medwire, Mount Vernon, NY, USA) into the CA1 hippocampal regions (Hipp; A/P: −3.6 mm from the bregma; R/L: 2.5 mm from the sagittal suture; D/V: 2.5 mm from the brain surface, in accordance with Paxinos and Watson ). To assess skeletal muscle activity (EMG), the bilateral wire electrodes were implanted into the dorsal nuchal musculature, and a stainless-steel screw electrode was implanted in the nasal bone as a ground. All the electrode leads were soldered to a miniature connector plug (39F1401, Newark Electronics, Schaumburg, IL, USA), and the assembly was fixed to the screw electrodes and skull using acrylic dental cement (Biocryl-RN, Galenika a.d. Beograd, Serbia). All the lesions were performed by stereotaxically guided microinfusions during the same surgical procedure for the implantation of the EEG and EMG electrodes, using a Digital Lab Standard Stereotaxic Instrument (Stoelting Co., Dublin, Ireland) with a Quintessential Stereotaxic Injector (Stoelting Co., Wood Dale, IL, USA) and a Hamilton syringe (10 µL or 1 µL). PD cholinopathy in Wistar rats was induced by the bilateral PPT lesion, using ibotenic acid (IBO, Sigma-Aldrich, St. 
Louis, MO, USA). We infused 100 nL of 0.1 M IBO/0.1 M PBS bilaterally into the PPT (A/P: −7.8 mm from the bregma; R/L: 1.9 mm from the sagittal suture; D/V: 7.0 mm from the brain surface, following Paxinos and Watson ), as a continuous infusion over 60 s . The hemiparkinsonism was induced by the unilateral SNpc lesion, using 6-hydroxydopamine hydrobromide (6-OHDA, Sigma-Aldrich, St. Louis, MO, USA). We infused 1 µL of 6 µg/µL 6-OHDA, dissolved in ice-cold sterile saline (0.9% NaCl) and supplemented with 0.2% ascorbic acid as an antioxidant, into the right SNpc (A/P: −5.3 mm from the bregma; R: 2.4 mm from the sagittal suture; D/V: 7.4 mm from the brain surface, following Paxinos and Watson ). The 6-OHDA microinfusions were performed as a continuous infusion at a constant flow rate of 200 nL/min over 5 min . To minimize the uptake of 6-OHDA by the noradrenergic neurons, 30 min prior to the microinfusion, each rat received a bolus of desipramine hydrochloride (28.42 mg/kg, i.p., Sigma-Aldrich, Taufkirchen, Germany; pH = 7.4). To induce hemiparkinsonism with PD cholinopathy, we performed a double lesion: a unilateral SNpc lesion combined with a bilateral PPT lesion . After each microinfusion, the needle remained within the local brain tissue for 5 min, allowing the solution to diffuse within the PPT or SNpc. For the bilateral PPT lesions, the Hamilton syringe needle was always washed out following the first IBO microinfusion, before the microinfusion into the contralateral PPT. At the end of the surgical procedure, the scalp wounds were sutured, and the rats were allowed to recover for two weeks. All the sleep recording sessions were performed 14 days after the surgical procedure. Sleep was recorded for 6 h during the light phase, starting at 9 a.m. The EEG and EMG activities were differentially recorded.
After conventional amplification and filtering (0.3–100 Hz band-pass; A-M Systems Inc., Model 3600, Carlsborg, WA, USA), the analogue data were digitized (at a sampling rate of 256 samples/s) using DataWave SciWorks Experimenter Version 8.0 (DataWave Technologies, Longmont, CO, USA), and the EEG and EMG activities were displayed on a computer monitor and stored on a disk for further off-line analysis . Behavioral assessments were done a week following the sleep recordings, during the light phase (starting at 9 a.m.), as previously described . Before each test, the animals were allowed to habituate to the experimental room for 30 min. The basal locomotor activity was monitored for 30 min using an Opto-Varimex Auto-Track System (Columbus Instruments, Columbus, OH, USA) and expressed as distance (centimeters) traveled in the open field arena. The spatial habituation test was performed over three consecutive sessions (locomotor activity during 30 min in the open arena) separated by 24-h intervals and served as an indirect measure of spatial memory abilities. At the end of all the recordings and behavioral assessments, the rats were sacrificed for histology. All animals were deeply anesthetized with ketamine/diazepam and perfused transcardially with 0.9% saline, followed by 4% paraformaldehyde (PFA, Sigma-Aldrich, Taufkirchen, Germany) in 0.1 M phosphate-buffered saline (PBS, pH = 7.4) and finally with a 10% sucrose solution in 0.1 M PBS. The brains were removed and immersed in 4% PFA overnight, and then in a 30% sucrose solution for several days. The brains were serially sectioned on a cryostat (Leica, Wetzlar, Germany) into coronal 40 μm-thick sections, and the free-floating sections were stored in a cryoprotective buffer for further use . 4.5.1. Lesion Identification and Quantification The PPT lesion was identified using NADPH–diaphorase histochemistry and quantified based on the number of NADPH–diaphorase positively stained cells within the PPT .
As previously described , the free-floating sections were rinsed in 0.1 M PBS (pH = 7.4) and incubated for 1 h at 37 °C in the staining solution containing β-NADPH reduced tetrasodium salt (Serva, Heidelberg, Germany) and dimethyl sulfoxide (DMSO, Sigma-Aldrich, Taufkirchen, Germany) dissolved in substrate solution. The substrate solution contained nitro blue tetrazolium chloride (NBT, Serva, Heidelberg, Germany) and 5-bromo-4-chloro-3-indolyl phosphate (BCIP, Serva, Heidelberg, Germany) dissolved in the substrate buffer at pH = 9.5 (0.1 M Tris, 100 mM NaCl, 5 mM MgCl2). The background staining induced by the endogenous alkaline phosphatase was reduced by 2 mM levamisole (Sigma-Aldrich, Taufkirchen, Germany). Finally, all the sections were mounted on slides, placed in a clearing agent (Xylene, Zorka Pharma, Sabac, RS), coverslipped using DPX (Sigma-Aldrich, Burlington, MA, USA), and examined under a Zeiss Axiovert microscope with a camera (Zeiss, Jena, Germany). The SNpc lesion was identified by tyrosine hydroxylase (TH) immunohistochemistry and quantified based on the number of TH immunostained cells within the SNpc. The brain sections were initially thoroughly rinsed with 0.1 M PBS. The endogenous peroxidase activity was neutralized using 3% hydrogen peroxide/10% methanol/0.1 M PBS for 15 min, and non-specific binding was prevented by 60 min of incubation in 5% normal donkey serum (D9663, Sigma-Aldrich, Burlington, MA, USA)/0.1 M PBS at room temperature . The sections were further incubated for 48 h at +4 °C with a primary mouse monoclonal anti-TH antibody (dil. 1:16,000, T2928, Sigma-Aldrich, Burlington, MA, USA) in a blocking solution with 0.5% Triton X-100 (Sigma-Aldrich, Burlington, MA, USA), and subsequently for 90 min in polyclonal rabbit anti-mouse immunoglobulin (dil. 1:100, P0260, Agilent Dako, Glostrup, Denmark). Between each immunolabeling step, the sections were washed in fresh 0.1 M PBS (3 × 5 min).
The immunoreactive signals were visualized using a diaminobenzidine solution (1% 3,3′-diaminobenzidine [11208, Acros Organics, Geel, Belgium]/0.3% hydrogen peroxide/0.1 M PBS). All the sections were finally mounted on slides, dehydrated in a series of increasing ethanol solutions (70%, 96%, and 100% ethanol, Zorka Pharma, Sabac, RS), placed in a clearing agent (Xylene, Zorka Pharma, Sabac, RS), coverslipped with DPX (Sigma-Aldrich, Burlington, MA, USA), and examined under a Leica light microscope with a camera (Leica, Wetzlar, Germany). To test the specificity of the immunolabeling, the primary antibodies were omitted in the control experiments. The quantification of cholinergic and/or dopaminergic neuronal loss was done by counting the NADPH–diaphorase or TH positively stained cells using ImageJ 1.46 software . For this purpose, all the tissue samples of the corresponding experimental group and brain structure (three sections per rat and brain structure) were grouped into three stereotaxic ranges defined according to the overall PPT or SNpc antero-posterior dimension (for the SNpc lesion: −4.60 to −5.10, −5.20 to −5.70, and −5.80 to −6.30 mm posterior from the bregma; for the PPT lesion: −6.90 to −7.40, −7.50 to −8.00, and −8.10 to −8.60 mm posterior from the bregma). The neuronal losses were expressed with respect to the mean control absolute number for each stereotaxic range of the PPT/SNpc, which was taken as 100%. The unilateral SNpc lesions were quantified with respect to their corresponding contralateral SNpc, whereas the bilateral PPT lesions were quantified with respect to controls .
A pilot study of team-based learning in one-hour pediatrics residency conferences | a41c39b0-7667-408d-81be-12408505ddf6 | 6637552 | Pediatrics[mh] | Active learning methods improve knowledge retention, facilitate feedback, and motivate learners . They are also well-suited to foster non-technical skills, such as communication and teamwork. Despite the benefits of active learning, implementation challenges arise in residency due to time constraints. Methods such as flipped classroom and simulation are often time-intensive and difficult to integrate into one-hour conferences . Team-based learning (TBL) is an active learning method that is widely used in medical school education. Each TBL session follows a structured approach: preparation, readiness assurance tests (RATs), and application exercises . Preparation supports foundational knowledge acquisition through readings or videos completed before class. In class, learners’ grasp of the foundational knowledge and preparation is first assessed with an individual readiness assurance test (IRAT). Learners then work through the same test in groups, termed the group readiness assurance test (GRAT), to deepen understanding and make connections through dialogue and debate. Finally, application exercises require learners to work in teams to apply their knowledge to complex real-world problems without one final answer. Discussion is critical to TBL, particularly as a large group following the GRAT and the application exercise, to support peer learning. For each TBL session, teams are kept intact to foster collaboration. Fewer than 10 studies describe specific TBL curricula in residency programs, including family medicine, internal medicine, pathology, psychiatry, physical medicine and rehabilitation, and surgery; none in pediatrics . Positive outcomes are described in learner satisfaction and engagement for the majority of learners . 
Due to the emphasis in TBL on covering a smaller breadth of material in greater depth, a subset of resident learners in one study felt TBL was less efficient than lectures . However, studies of TBL in residency do show knowledge gains based on resident self-assessment and significant increases in scores from the IRAT to the GRAT . The curricula vary in structure and length, with most utilizing 2–3 h blocks for TBL. Only two studies have examined TBL in one-hour conferences. One study applied the TBL structure to a monthly journal club in psychiatry, with participants describing high acceptance and perceived benefit for learning critical appraisal skills. Another study evaluating a year-long general surgery curriculum showed that the TBL format led to improved engagement of learners, greater perceived knowledge gains, better perceptions of the educational experience, and higher in-training exam scores . Given the positive experiences in other specialties, we aimed to apply TBL in one-hour pediatrics conferences and evaluate feasibility, learner satisfaction, and knowledge acquisition.
We implemented TBL in one-hour conferences for a pediatrics residency program at an urban academic medical center. In February 2015, three one-hour TBL sessions were held during the residency noon conferences, replacing the traditionally utilized lectures. Table provides an overview and timeline for the sessions. The three TBL sessions were held within a two-week clinical block to maximize team consistency. Learners were divided into six teams by the facilitators based on the clinical rotation to promote team development. Teams had 4–6 members, ranging from third-year medical students to fourth-year residents; each team had approximately the same number of students, first-year residents, and upper level residents. Faculty members who attended the conferences observed and shared input but did not join teams due to irregular attendance. The two facilitators for all the sessions were a pediatrics faculty member and resident, both with experience in medical education and training in TBL through masters level coursework. The topics for the TBL sessions were selected based on existing gaps in the residency’s conference curriculum. The first session introduced learners to TBL with a team-building exercise. Two subsequent TBL sessions focused on sports physicals and menstrual disorders. Sessions were developed using the principles of backwards design . Learning objectives and a related application exercise were developed first, with a focus on aligning objectives with the residency’s curriculum and board specifications. The application exercise was developed based on the principles of 4S (significant problem, same problem, specific choice, and simultaneous reporting). Based on the objectives and application exercise, multiple choice questions and pre-reading articles were selected that supported the necessary foundational knowledge. The content of the RAT and application exercises was reviewed by faculty with topic expertise. 
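Fitting all of the standard TBL phases into a single one-hour conference was a central design constraint. As a purely hypothetical illustration (the paper does not report per-component timings, and the sessions in practice ran about 45 min), a time budget for the phases might be laid out and sanity-checked like this:

```python
# Hypothetical time budget (in minutes) for the standard TBL phases in
# a one-hour conference. Every allocation below is an illustrative
# assumption, not the authors' actual schedule.
BUDGET_MIN = {
    "team setup": 5,
    "IRAT": 8,
    "GRAT": 8,
    "large-group RAT review": 9,
    "application exercise": 20,
    "simultaneous report and debate": 8,
    "facilitator wrap-up": 2,
}

def total_minutes(budget):
    """Total session length implied by the per-phase allocations."""
    return sum(budget.values())
```

Because the phases must sum to the conference length, lengthening one component has to be paid for by shortening another, which is consistent with the learners' later feedback that they felt rushed.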
Sessions were conducted using the standard TBL structure . Table describes the essential elements of TBL included in each session, based on the guidelines for reporting of TBL . Pre-class preparation consisted of reading a journal article about the specific topic, distributed at the prior session and via email. RATs consisted of 6–7 boards-style multiple-choice questions from the American Academy of Pediatrics’ Pediatrics Review and Education Program. After the IRAT and GRAT were completed, questions and their answers were discussed and feedback was provided within the large group. Next, teams engaged in application exercises using clinical cases about each topic, which required trainees to make diagnostic and management decisions; for example, for the sports physicals session, teams reviewed actual clinical cases of adolescent children and had to decide how they would complete the sports physical form and whether they would allow the child to participate in high school level sports. Each team concurrently worked on the same clinical case through discussion at their tables. Then, the teams simultaneously presented their solutions to the large group and discussion ensued between teams to explain clinical reasoning and debate responses. At the conclusion, facilitators gave a brief verbal summary of the topics discussed during the session. At the end of the three sessions, incentives in the form of food were given to the team who had the most points. Points were earned for the correct answers on the IRAT, GRAT, and application exercise, as judged by the chief residents. No grades were assigned for these sessions. A pre-post design was used to evaluate feasibility, learner satisfaction, and knowledge acquisition. Before the first session, participants were surveyed about their experience with TBL, based on a three-point scale (none, some, several) (Additional file ). 
After the last session, an anonymous questionnaire assessed residents’ reading completion rates (options: none, skimmed, half, entire article) as well as satisfaction and perceptions about engagement, knowledge acquisition, and desire for more TBL (five-point Likert scale for agreement). Likert-scale data were analyzed by grouping responses into three categories: strongly agree/agree, neutral, and disagree/strongly disagree. Open-ended questions were used to assess strengths and areas for improvement (Additional file ). Both the pre- and post-assessments were developed by the authors. Attendance and IRAT/GRAT scores were recorded. Chief residents observed all sessions to assess strengths and challenges; after the final TBL session, the facilitators debriefed with the chief residents about the sessions and recorded notes. Quantitative analysis included descriptive statistics and Pearson correlation. Qualitative analysis of the open-ended questions used an iterative process based on grounded theory principles . One author reviewed responses and developed codes independently. These codes were discussed and revised by the two authors until key themes were established and agreed upon. The University of Chicago IRB deemed this study exempt.
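The two quantitative steps described above — collapsing five-point Likert responses into three categories and correlating items with Pearson's coefficient — can be sketched as follows; the response arrays are made-up illustrations, not the study's data:

```python
from scipy.stats import pearsonr

# Hypothetical five-point Likert responses (1 = strongly disagree ...
# 5 = strongly agree) for two post-questionnaire items; the values are
# illustrative only, not the study's data.
engagement = [5, 4, 4, 5, 3, 2, 4, 5, 3, 4]
want_more_tbl = [5, 4, 5, 5, 3, 2, 4, 4, 3, 4]

def collapse(responses):
    """Group five-point responses into the three reported categories."""
    labels = {1: "disagree", 2: "disagree", 3: "neutral",
              4: "agree", 5: "agree"}
    return [labels[r] for r in responses]

# Strength and significance of the association between the two items.
r, p = pearsonr(engagement, want_more_tbl)
```

With responses this concordant, `r` is large and `p` falls below the conventional 0.05 threshold, mirroring the kind of item-to-item correlation the study reports.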
There were 47 unique participants (36 residents and 11 medical students), with 29–33 learners per session. One-third of residents (36%, 13/36) attended all three TBL sessions and an additional 27% (10/36) attended two; the medical student rotation switched during this two-week block, so each medical student attended only one of the TBL sessions. Twenty-nine participants completed the pre-questionnaire (62%, 29/47) and 27 (57%, 27/47) completed the post-questionnaire; the proportion of resident respondents was 83% (pre) and 59% (post). Feasibility Most participants (55%, 16/29) were not familiar with TBL before this series. For preparation, 11% (3/27) of participants read the entire article for both sessions; more than one-third skimmed or did not read the article before the sessions. Each TBL session lasted 45 min, rather than the planned one hour, due to participant delays from obtaining food or clinical responsibilities as well as the time required to organize the teams. The facilitators regularly attempted to maintain the forward flow of the conference; however, at times these efforts required shortening the discussion time to ensure all TBL components were included. Learner satisfaction Overall, 66.7% (18/27) of learners were satisfied with the TBL sessions (see Fig. ). Several learners appreciated the collaboration, teamwork, and critical thinking. One participant liked the “opportunity to work with residents at different levels and observe their approach to clinical scenarios.” Learners desired additional time for each TBL component, describing that they “always felt rushed” and “needed more time to discuss as a team.” They proposed fewer activities/questions per session or longer sessions. Several participants suggested removing the IRAT, while others indicated it was a strength that provided the “opportunity to test our knowledge first, prior to attempting questions together.” Learners actively participated in all TBL components.
Most learners (74%, 20/27) reported more engagement during TBL than during traditional conferences (see Fig. ). One resident described TBL as “more interactive than typical noon conference” and another stated they “couldn’t tune out.” Chief residents and facilitators also observed higher levels of learner engagement than in lecture-based conferences. The majority (63%, 17/27) wanted more TBL, particularly those who felt more engaged and those who perceived they learned more in TBL versus traditional conferences (Pearson correlation, r = 0.914 and r = 0.771, p < 0.01).

Knowledge acquisition
Mean IRAT and GRAT scores were 57.1 (SD = 12.1) and 66.7 (SD = 17.3) for sports physicals and 45.2 (SD = 26.8) and 77.8 (SD = 22.8) for menstrual disorders, respectively. Nearly half of participants (48%, 13/27) perceived they learned more with TBL as compared to lecture-based conferences.
Our study is one of the few to apply TBL in one-hour residency conferences. We demonstrate that it leads to greater satisfaction and engagement among learners compared with traditional lectures; however, substantial time constraints limit its feasibility during one-hour conferences. These results support the findings of prior residency-based TBL studies and align with residents’ preferences for active learning. It proved challenging to incorporate all TBL components in a one-hour conference, in part because sessions were limited to 45 min in our real-world application. Due to the time limitations, discussion was truncated, potentially limiting positive outcomes. This challenge has not been previously described, because TBL has traditionally been utilized in 2–3-h blocks. Notably, however, the general surgery curriculum sessions utilizing TBL were extended from 1 h to 1.75 h after the first year, suggesting similar time pressures. The impact of time suggests that TBL may be better suited for longer conferences to ensure that learners attain the learning objectives. Alternatively, future studies can explore TBL adaptations that avoid curtailing discussion, such as limiting topic scope or completing the IRAT before the conferences. Despite learner engagement during sessions, pre-conference preparation rates were low, consistent with other residency-based TBL studies. Given busy residency schedules, learners may lack the motivation or time to read, leading to poor compliance. Effective approaches to support pre-conference knowledge acquisition must be delineated. Videos for individual review may help increase completion; alternatively, team-oriented pre-work can be considered, as it may foster peer pressure. Further, because grades carry less relevance in graduate medical education, motivators relevant at the resident level that can incentivize completion of the pre-conference preparation must be considered.
Acceptability of preparation may also increase as participants gain familiarity with TBL. Participants had mixed perceptions about whether they learned more in TBL versus lectures, suggesting a tension between breadth and depth of content. TBL emphasizes knowledge application, promoting depth at the expense of breadth, in contrast to lectures; depth may also have been limited in our sessions due to the time constraints. Breadth and depth should be balanced in educational conferences, and future studies should examine whether TBL designed for a traditional noon conference performs similarly to the typical lectures presented within these conferences. This curriculum was piloted in one residency program at an academic medical center, limiting generalizability. Facilitators had not previously led TBL but participated in faculty development, and thus had experience similar to that of faculty who may adopt TBL for conferences. Because preparation rates were low among residents and students, the failure to develop learner accountability for completing the preparatory work limits potential knowledge acquisition and the overall impact of the TBL sessions; thus, consideration must be given to motivators that improve preparation. Finally, objective tools to assess knowledge and engagement were not utilized in this study and may have shown concordant or discordant results; future studies should incorporate such tools to compare individuals’ perceptions about knowledge acquisition and engagement with objective findings.
Our study shows TBL has the potential to foster more active learning, learner engagement, and knowledge acquisition than lectures during one-hour conferences; however, it is not feasible in its current design. Future work is needed to adapt TBL to better fit constraints of one-hour sessions, encourage pre-conference preparation, and evaluate the impact of TBL on knowledge retention and teamwork skills among residents.
Additional file 1: Appendix 1 - Pre-assessment for Team-Based Learning During Residency Noon Conference. (DOCX 17 kb) Additional file 2: Appendix 2 - Post-Assessment for Team-Based Learning During Residency Noon Conference. (DOCX 20 kb)
Primary implant stability of two implant macro-designs in different alveolar ridge morphologies: an in vitro study | b1eb99fa-4717-443f-a149-1914b2a022fa | 11885739 | Dentistry[mh] | Dental implants are a well-established and reliable option for replacing missing teeth in both partially and fully edentulous patients. The stability of dental implants and their long-term success are ensured through the process of osseointegration . Osseointegration refers to the direct structural and functional connection between living bone and the surface of a load-bearing implant . This process requires achieving primary stability at the time of implant placement, followed by undisturbed wound healing, facilitating a series of critical biological events that culminate in osseointegration and peri-implant tissue stability . Primary implant stability during placement is attained through the direct mechanical engagement with the surrounding alveolar bone . Over the course of 4 to 8 weeks, primary stability is gradually superseded by secondary stability, which is driven by a biological bone remodeling around the implant . Insufficient primary stability may jeopardize the process of osseointegration, as micromovements between implant and surrounding bone exceeding 100 μm potentially disrupt bone healing and lead to fibrous encapsulation rather than osseointegration . Comprehensive treatment planning for failing teeth and dental implant therapy is complex, encompassing numerous factors such as the choice of the ideal implant design characteristics, the appropriate timing of implant placement following tooth extraction and the subsequent loading protocols. The selection of these treatment options should aim to predictably achieve long-term treatment success, including optimal esthetic outcomes and a low risk of complications, while also striving to reduce the number of surgical and clinical procedures, whenever feasible . 
As patient interest in shorter treatment times continues to grow, immediate implant placement has gained popularity, particularly when paired with immediate restoration, with or without immediate loading . However, the success of immediate protocols depends significantly on achieving high primary stability at the time of implant placement, which is often challenged by local morphological factors when comparing implant engagement in fresh extraction sockets versus late implant placement in healed alveolar ridges . Several additional factors also influence primary implant stability, including alveolar bone density and dimensions, implant design characteristics, and surgical technique . Although the precise threshold of adequate primary stability for immediate restoration or loading remains unclear, a minimum insertion torque of 35 Ncm during implant placement is frequently recommended . To address this challenge of adequate primary stability, particularly in immediate implant placement scenarios, implants with modified macro-designs have been developed in recent years. These modifications, which include changes to implant shape, surface topography, and thread design (depth, pitch, and shape), are intended to enhance primary stability . While a recent review suggested only minimal differences in primary stability between tapered and non-tapered implants , multiple in vitro and in vivo studies indicate that tapered designs generally provide higher primary stability compared to cylindrical implants . Despite these findings, there is only limited information on the effect of alveolar ridge morphology on primary implant stability and the influence of various implant macro-designs. Consequently, there is a need for recommendations on selecting specific implant specifications tailored to different clinical scenarios involving immediate placement and loading protocols. 
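As a rough illustration of the insertion-torque thresholds discussed here (a frequently recommended minimum of 35 Ncm for immediate loading, and the roughly 10 Ncm floor for conventional loading cited later in the discussion), a simple triage function might look as follows; the function name and default cut-offs are illustrative, not a clinical decision rule.

```python
def loading_recommendation(final_torque_ncm: float,
                           immediate_threshold: float = 35.0,
                           conventional_minimum: float = 10.0) -> str:
    """Map a final insertion torque (Ncm) to a loading-protocol suggestion.

    The thresholds reflect values commonly cited in the literature and
    mentioned in this article; they are illustrative only.
    """
    if final_torque_ncm >= immediate_threshold:
        return "candidate for immediate loading"
    if final_torque_ncm >= conventional_minimum:
        return "conventional loading"
    return "insufficient primary stability"

print(loading_recommendation(42.5))  # candidate for immediate loading
print(loading_recommendation(22.0))  # conventional loading
```

In practice, such a rule would be only one input among many, since bone density, ridge morphology, and implant design all interact with the measured torque.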
Therefore, the primary aim of this in vitro investigation was to assess the influence of two different alveolar ridge morphologies on the primary implant stability. The secondary aim was to assess the impact of two implant macro-designs on primary stability and to examine the reliability of resonance frequency analysis (RFA) in comparison to final insertion torque as a measure for primary implant stability. The null hypotheses were as follows: alveolar ridge morphology (H01), implant macro-design (H02), and their interactions (H03) do not influence primary implant stability during implant placement.
Models and virtual implant planning
The present in vitro study was designed and conducted in the Department of Oral Surgery and Stomatology at the University of Bern, Switzerland, from November 2021 to February 2022. Standardized partially edentulous models mimicking a cortico-spongious alveolar bone density of D2 were used (BoneModels, Castellón de la Plana, Spain). Each model presented six single-tooth edentulous sites: FDI teeth positions 16, 14, and 25 simulating healed alveolar ridge morphologies, and FDI teeth positions 12, 21, and 23 simulating fresh extraction sockets (Fig. ). For each model, virtual implant planning was performed in a dedicated software package (coDiagnostiX 10.5, Dental Wings Inc, Montreal, Canada) based on a cone beam computed tomography (CBCT) scan (8 × 5 cm field of view, 80 μm voxel size, 90 kVp, 1 mAs; 3D Accuitomo 170, J. Morita Corp, Osaka, Japan) and a surface scan acquired with a laboratory scanner (3Shape 4, 3Shape Inc, Copenhagen, Denmark). After superimposing the files, the ideal 3D implant position for each site was planned by an experienced clinician (C.R.) based on a digital wax-up (Zirkonzahn.Modellier, Zirkonzahn GmbH, Gais, Italy) for screw-retained single implant crowns. In extraction socket sites, an apical implant engagement of at least 4 mm was respected. Subsequently, the surgical guide was designed with a material thickness of 3.5 mm and a guide-to-tooth offset of 0.15 mm. Multiple fenestrations were included to allow visual verification of the guide’s fit on the model. The guides were manufactured for each model using a transparent, light-cured resin for stereolithography (ProArt Print Splint, Ivoclar Vivadent AG, Schaan, Liechtenstein) in a 3D printer (PrograPrint PR5, Ivoclar Vivadent AG, Schaan, Liechtenstein).

Guided implant placement and study groups
To recreate the clinical scenario as closely as possible, the models were mounted in phantom heads.
Afterwards, fully guided static computer-assisted implant surgery (sCAIS) procedures were carried out according to the manufacturer’s protocols using a surgical motor (iChiropro, Bien-Air, Bienne, Switzerland). The study involved two bone-level type implants, each with distinct macro-design features (Fig. ):
(1) a shallow-threaded, parallel-walled implant body with a thread pitch of 0.8 mm (BL 4.1 × 12 mm RC, Straumann AG, Basel, Switzerland), representing a conventional design available for decades to address a broad range of clinical indications; and
(2) a deep-threaded, tapered implant body with a thread pitch of 2.25 mm (BLX 4.0 × 12 mm RB, Straumann AG, Basel, Switzerland), a recently introduced design intended to achieve high primary stability, particularly in immediate implant placement protocols.
These implants were randomly assigned to the edentulous sites, ensuring equal sample sizes for each group.

Measurement of primary implant stability
The primary stability of all implants was assessed using two methods: (1) continuous measurement of the insertion torque (Ncm) over time during implant placement using the surgical motor (iChiropro, Bien-Air, Bienne, Switzerland); and (2) resonance frequency analysis (RFA) after final implant placement using hand-tightened, implant-specific transducers and an RFA device (Osstell ISQ, Integration Diagnostics Ltd., Goteborgsvagen, Sweden). The RFA assessment was conducted three times in both the mesio-distal and bucco-lingual orientations, recording the lowest value from each orientation; the mean of these two lowest values was then calculated (Fig. ).

Statistical analysis
The primary outcome of the present study was the comparison of final torque values between the different alveolar ridge morphologies, followed by the secondary outcome of the same variables for the different implant macro-designs. Finally, the correlation between RFA and final torque values was investigated. All collected data were presented as means and standard deviations (SD).
Two-way analysis of variance (ANOVA) was used for the primary and secondary outcomes to verify the effects of the independent variables (alveolar ridge morphology and implant macro-design) on the dependent variables (final torque and mean RFA). Main and interaction effects were tested, and multiple comparisons were performed with Sidak’s post hoc test. Effect sizes and observed power were calculated, and interaction plots were generated. The correlation between final torque and mean RFA was assessed using Pearson’s bivariate correlation coefficient. All analyses were carried out using IBM SPSS v.26 software, adopting a significance level of 5%.
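The mean-RFA derivation described above (lowest of three readings per orientation, averaged across the two orientations) and Pearson's correlation between final torque and mean RFA can be reproduced with the standard library alone; the per-implant readings below are illustrative, not the study measurements.

```python
from statistics import mean, stdev

def mean_rfa(mesio_distal, bucco_lingual):
    """Lowest of three readings per orientation, then the mean of the two lows."""
    return mean([min(mesio_distal), min(bucco_lingual)])

def pearson_r(x, y):
    """Pearson's bivariate correlation coefficient (sample form)."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / ((len(x) - 1) * stdev(x) * stdev(y))

# Hypothetical per-implant data (not the study measurements).
torque = [42.0, 35.5, 28.0, 47.5, 31.0]  # final insertion torque, Ncm
rfa = [mean_rfa(md, bl) for md, bl in [
    ([71, 73, 72], [69, 70, 71]),
    ([65, 66, 64], [62, 63, 65]),
    ([55, 58, 56], [54, 57, 55]),
    ([74, 75, 73], [72, 74, 73]),
    ([60, 61, 59], [58, 60, 59]),
]]
print(round(pearson_r(torque, rfa), 3))
```

With these made-up values the correlation is strongly positive, in the same spirit as the r = 0.742 reported in the study, but the numbers carry no clinical meaning.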
Study sample
A total of 144 implants (BL n = 72, BLX n = 72) were equally distributed to static computer-assisted implant placement in single-tooth sites with healed alveolar ridge (n = 72) or extraction socket morphology (n = 72) in 36 models.

Alveolar ridge morphology
Higher final torque values were observed when implants were placed in healed ridge sites compared to extraction sockets (p < 0.001). Notably, the insertion torque increased linearly, with a steeper incline in healed ridge sites compared to extraction sockets (Fig. ). Similarly, higher mean RFA values were observed for implants in healed ridges compared to extraction sockets (p < 0.001). A positive and statistically significant correlation was found between final insertion torque and mean RFA values (r = 0.742; p < 0.001), as illustrated in Fig. . Descriptive statistics and corresponding box plots are displayed in Table and Fig. . The main effects and multiple comparisons between implant type and alveolar ridge morphology for mean RFA and final torque are presented in Tables and .

Implant macro-design
Higher final torque values were observed for BL implants compared to BLX implants (p < 0.001). BL implants exhibited a more linear torque increase in healed sites, whereas BLX implants showed a more progressive torque formation curve (Fig. ). Similarly, higher mean RFA values were recorded for BL implants compared to BLX implants (p < 0.001). Descriptive statistics and corresponding box plots are displayed in Table and Fig. . The main effects and multiple comparisons between implant type and alveolar ridge morphology for mean RFA and final insertion torque are illustrated in Tables and .

Interactions
The alveolar ridge morphologies were compared within each implant macro-design group. The BL implants presented statistically significantly higher final torque and mean RFA values in healed sites compared to extraction socket sites (p < 0.001).
Likewise, in the BLX implant group, statistically significantly higher final torque and mean RFA values were observed in healed sites compared to extraction socket sites (p < 0.001). Conversely, the implant macro-design was analyzed according to the alveolar ridge morphologies. When placed in extraction socket sites, the BL implants presented statistically significantly higher final torque and mean RFA values compared to BLX implants (p < 0.001). Single outliers in torque and RFA values were observed in both the BL and BLX groups at socket sites, reflecting the challenging anatomical features that can compromise the predictability of primary stability in immediate implant placement (Fig. ). When placed in healed sites, BL implants achieved statistically significantly higher final torque values than BLX implants (p = 0.037); however, no statistically significant difference was observed between the mean RFA values of BL and BLX implants in fully healed sites (Tables and , and Table ). The interactions of implant type and alveolar ridge morphology had a statistically significant effect on both the final torque (p = 0.025) and the mean RFA (p = 0.003).
The present in vitro study examined the primary stability of implants with two different macro-designs placed into simulated fresh extraction sockets compared to healed alveolar ridges. The results demonstrate higher final torque and RFA values in fully healed sites compared to extraction socket sites and for BL compared to BLX implants. The final insertion torque and RFA values were positively correlated, demonstrating the reliability of RFA in implant stability assessment. Therefore, H01, H02, and H03 were rejected. The present study demonstrates that the morphology of the alveolar ridge significantly impacts primary implant stability, with extraction sockets demonstrating lower final insertion torque and RFA values compared to healed alveolar ridges. This is in line with the results of a clinical trial reporting insertion torques of 65.5 Ncm versus 53.7 Ncm and RFA values of 72.8 versus 63.9 for healed sites compared to extraction sockets. Similarly, another in vitro study reported insertion torques of 49 Ncm versus 28 Ncm and RFA values of 62 versus 53 for full embedment in bone compared to circular defects. The significantly lower primary implant stability in extraction socket sites might be attributed to the incomplete embedding of the implant in bone. To achieve sufficient primary stability in these cases, it is recommended that the implant osteotomy extend 3–4 mm apically beyond the socket, or that the drilling protocol be modified by underpreparing the osteotomy. Conversely, implants in healed alveolar ridges are fully embedded in bone, a factor that also contributes to implant positioning accuracy. Significantly higher positional deviations between planned and final implant positions, with implants deviating toward the zone of least resistance, have been found for extraction socket sites. These deviations may affect apical implant engagement and, consequently, primary implant stability.
While high primary implant stability is a prerequisite for immediate loading protocols, excessively high insertion torque does not necessarily enhance the process of osseointegration. In fact, high insertion torques could induce pronounced local bone necrosis, potentially compromising osseointegration. Conversely, in conjunction with conventional implant loading, low insertion torque values do not negatively affect osseointegration as long as implant stability remains above 10 Ncm. In addition to local anatomical characteristics, the macro-design of the implant plays a significant role in achieving primary stability during implant placement. Interestingly, the present study demonstrated lower primary implant stability for BLX implants compared to BL implants across both simulated clinical scenarios. These results are supported by an in vitro study that reported higher RFA and final torque values for BL implants across various bone densities compared to BLX implants. In contrast, an ex vivo study reported higher RFA and final torque values for BLX implants compared to BL implants in low-density scenarios using cancellous porcine iliac crest blocks. Despite the differences observed in the present study, both implant designs provided sufficient primary stability for conventional loading protocols in extraction sockets, as final torque values exceeded the 10 Ncm threshold. However, neither the BL nor the BLX design met the recommended 35 Ncm threshold for immediate loading in this study. Interestingly, in healed sites, the influence of implant design on primary stability was less pronounced, with both designs potentially qualifying for immediate implant loading protocols. The higher primary implant stability of BL implants may be attributed to their smaller thread pitch compared to BLX implants: a smaller thread pitch increases the implant surface area, leading to greater bone-to-implant contact and enhanced mechanical anchorage.
Additionally, the core diameter of the BLX implant (3.5 mm) is considerably smaller than that of the BL implant (4.1 mm); larger implant diameters and non-self-cutting threads are also associated with higher primary implant stability. Conversely, tapered implant body designs have been suggested to achieve higher primary implant stability than cylindrical implants, likely due to greater compression of the surrounding bone, which may provide favorable stress on the tissue and reduce the risk of micromovement. Therefore, underpreparation of the implant bed for tapered BLX implants could potentially result in higher primary stability and might allow the threshold for immediate implant loading to be reached. This is supported by a recent randomized controlled study demonstrating significantly higher primary stability for implants placed in underprepared sites compared to those inserted following a conventional drilling sequence. Consequently, implant specifications, macro-design, and osteotomy protocols should be tailored to the individual, site-specific tissue characteristics. The implant designs investigated in this project were suitable for conventional loading protocols in both clinical scenarios, with BL implants consistently demonstrating higher primary stability. However, selecting BLX implants may be advantageous in cases where anatomical restrictions in the apical region of the osteotomy favor a tapered implant design. The findings of this study indicate that achieving primary stability compatible with immediate loading protocols during immediate implant placement was not predictable for either implant design; therefore, this treatment protocol should be limited to carefully selected cases, with conventional loading recommended in situations where primary stability is uncertain. Primary implant stability is commonly assessed at the time of placement using insertion torque.
However, this method is limited to a single-point measurement, as repeated assessments would disrupt the osseointegration process. RFA offers an alternative, allowing for non-invasive monitoring of implant stability post-placement by providing an Implant Stability Quotient score, ranging from 1 to 100. In this study, both final insertion torque and RFA values were recorded, and a positive, statistically significant correlation between the two was observed. This finding aligns with prior research from both in vitro and clinical studies, which also report a positive correlation between final insertion torque and RFA values. These results support RFA as a reliable tool for evaluating primary implant stability, particularly when compared to insertion torque. However, caution is warranted in long-term monitoring, as conflicting evidence exists regarding the relationship between RFA measurements, marginal bone loss, and other clinical parameters.

Several limitations of this study should be acknowledged. First, as an in vitro study, the generalizability of the results is limited, and caution is needed when extrapolating these results to clinical scenarios. The acrylic models used mimic the D2 density of human cortico-spongious bone but do not fully replicate the clinical environment, with its complexities at a specific location in the alveolar ridge and the variety of different sites throughout the maxilla and mandible. Further, anatomical limitations, such as limited vertical and horizontal bone, can occur in clinical situations and are not considered in this study. These could require bone augmentation procedures or the selection of narrower and shorter implants, potentially leading to reduced primary implant stability. Second, this study compared two implants with multiple differing macro-design features, potentially obscuring the individual effects of each feature and making it difficult to attribute the outcomes to a specific design characteristic.
Additionally, adjustments to implant specifications, such as using longer implants for enhanced apical engagement or wider implants for increased lateral bone engagement, would influence primary stability. Third, only one bone density and one drilling protocol were examined, leaving the influence of other factors unclear. Future studies should explore a broader range of bone densities and include different alveolar ridge morphologies, with horizontal and vertical bone defects, and their impact on primary implant stability. Further, different surgical techniques should be considered with regard to the potential of reaching the threshold for immediate implant loading in immediate placement procedures. Additionally, investigating implants with singularly distinct macro-design features would provide more clarity. Clinical validation is needed to address the limitations of generalizability and is recommended to assess osseointegration and secondary implant stability over time during follow-up periods.
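The positive correlation reported above between final insertion torque and RFA values can be illustrated with a minimal sketch of the Pearson product-moment calculation. The paired values below are hypothetical, chosen only for illustration; they are not data from this or any cited study.

```python
import math

# Hypothetical paired measurements (NOT study data):
# final insertion torque (Ncm) and Implant Stability Quotient (ISQ, 1-100)
torque = [12.0, 18.5, 22.0, 27.5, 31.0, 35.5, 40.0, 44.5]
isq = [55.0, 58.0, 61.0, 63.0, 66.0, 70.0, 72.0, 75.0]

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(torque, isq)
print(f"Pearson r = {r:.3f}")  # strongly positive for this illustrative data
```

A value of r near +1, as produced by data of this shape, is what a "positive, statistically significant correlation" between the two stability measures would look like; in practice a significance test (e.g., on the t-statistic of r) would accompany the coefficient.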
Within the limitations of this in vitro study, it can be concluded that:

• Implants inserted in healed alveolar ridges show higher final insertion torques and RFA values as compared to fresh extraction sockets.

• BL implants were found to have higher final insertion torques and RFA values compared to BLX implants in both simulated clinical scenarios.

• RFA was shown to be a reliable and repeatable method to assess primary implant stability as compared to the insertion torque values.
The European influence on workers' compensation reform in the United States

Workers' compensation law in the United States is derived from European social insurance. It has evolved at the federal and state levels over the past century through a long series of reform (or redesign) initiatives. There have been many hundreds of minor redesigns implemented by the state programs, the net result of which is a workers' compensation system that fails to provide the required benefits to workers. Recent reform initiatives in the United States draw heavily on European workers' compensation systems, yet the European model does not offer the possibility of reform that is more than a continuation of the redesign process.

The European Model

Workers' compensation is a European concept, dating back to German Chancellor Otto von Bismarck. By the turn of the 20th century, all European countries had workers' compensation laws. The German law required employees to pay part of the costs and called for highly centralized administration. Its coverage was broad, was compulsory, and provided for nonprofit mutual employers' insurance funds. Most industrialized nations now have national workers' compensation programs based on the German model. The British law embodied a different approach. It was elective, administration was left to the courts, and insurance was carried through private firms. The German system was closely linked to the rest of the social insurance system. It provided for accident prevention, medical treatment, and rehabilitation, whereas the British scheme did none of these things. The British system was troubled from the outset by disputes over which jobs and what industries were to be covered, resulting in the litigation that the system had been intended to replace.
Workers' Compensation in the United States

In the United States, two separate and unequal workers' compensation systems, federal and state, function in complete separation. The federal system, under the Federal Employees' Compensation Act (FECA), was designed and operated following the German model. FECA covers 2.9 million federal employees in more than 70 different agencies along with a number of other worker groups adopted by Congress in various acts of expansion of the federal authority. FECA provides benefits without delay, and moves disabled workers to other government programs in a non-adversarial system. It provides social insurance that most European countries would recognize as equal to their own.

The FECA program operates without competition. The Secretary of Labor has exclusive jurisdiction over the entire program, including the several appeal and review processes. The Department of Labor (DOL) has few constraints on what it charges the federal agencies, passing on its costs plus, in some cases, a fee based on a pro rata share of administrative costs. Most federal agencies include workers' compensation costs in their annual appropriation requests to Congress, which makes the costs difficult to discern.

The United States, late in accepting workers' compensation, allowed the individual states to develop separate and unequal programs. The state laws were influenced much more by the English system than the German one. Like the original English Poor Law, prevention of poverty, not prevention of disability and its social management, was the driving concern for the development of the state workers' compensation programs. The medical profession was a major opponent of compulsory social health insurance. The resulting fragmented workers' compensation arrangements now offered to most workers in state plans bear little resemblance to the federal program.
The state workers' compensation system has been confrontational with workers throughout its history, with benefits that are far from adequate. State programs pay for less than one third of the total costs of occupational injuries and illnesses, shifting most of these costs to the workers, their families, private medical insurance, and Medicare and Medicaid. Ten times as many severely disabled occupational disease victims receive Social Security Disability Insurance (SSDI) or early retirement benefits as receive workers' compensation benefits. Although the state programs may appear to approximate the FECA program, there are major deficiencies in the coverage and benefits they offer.

Occupational injuries and illnesses, if accurately reported, would be among the five leading causes of morbidity and mortality in the United States. These injuries and diseases, along with their fatalities, result in costs that are well over three times the published expenditures, and roughly 3% of the U.S. gross domestic product (GDP). Most of the costs of occupational disease are not covered by workers' compensation. Only about one in twenty severely disabled occupational disease victims receives workers' compensation benefits. For occupational cancers, it is fewer than one in a hundred.

Workers' Compensation in Europe

In Europe, social democracies--a mixed system that balances market forces with government assistance--have flourished since World War II. While the United States devotes only 11 percent of its GDP to redistributing income by way of social benefits, the countries of the European Union contribute more than 26 percent. In Europe, there is a tradition of strong labor political organizations. With the EU's large support bases for workers' compensation, reforms are not urgent necessities there as they are elsewhere. Nonetheless, significant variations exist among EU countries.
Social protection represents more than 30% of the GDP in Sweden, Denmark, France, and Germany, yet less than 15% in the Baltic countries. The United Kingdom spends a much smaller proportion of its GDP on occupational injury and disease benefits than other European countries. The new Conservative government is instituting a program of reassessing individuals receiving disability benefits. Three fourths of those evaluated in the early stages of the program have been found fit for work under new evaluation guidelines.

The basic criteria of the workers' compensation systems in European countries are similar. There are two distinct approaches. The first is that of the German system, with self-governed insurance associations funded by employers' contributions providing a comprehensive prevention, rehabilitation, and compensation service. In the second approach, the state administers the system for compensating occupational injuries and disease as part of its wider provision for social security and collects from employers the sums necessary to finance it. Many European countries now have mixtures of the state and private insurance systems.

While fewer than half of American workers are covered by short-term disability insurance, all workers in EU countries are covered against the risk of wage loss due to temporary sickness through government agencies. Wage-replacement schemes consist of social insurance covering the loss of earnings due to old age, unemployment, temporary sickness, or permanent disability. In all of the EU countries except The Netherlands, disability social security schemes are separated from compensation for occupational injuries. Coverage typically lasts up to a year, with transition to the longer-term disability insurance programs if needed. In The Netherlands, partially disabled unemployed workers are given the same benefits as totally disabled workers.
This offers considerable advantage over the failed attempts in the United States to deal with partial disability. The level of compensation has a profound influence on utilization. In The Netherlands, the rate of disability in the working-age population is close to 9%, compared with an average of 6% in other European countries. Forty percent of persons with disabilities become long-term unemployed. In response, comprehensive reforms have increased employer responsibilities over the past decade, and now provide a more limited benefit package. Employers are provided incentives to recruit disabled workers in order to reintegrate them into the work setting. Emphasis is placed on returning workers with injury and illness to acceptable jobs, and on improving the work environment to prevent recurrence.

Germany provides an example of the legally mandated role of the insurance organizations to provide specific initiatives on rehabilitation, prevention, regulation, and regulatory inspection. It is compulsory for employers to adapt the working conditions and/or to find a new work activity in the same company.

European countries have adopted lists of occupational diseases that are typically appended to regulatory provisions, thus ensuring the responsibility of the state. This is a concept that has never taken hold in the United States and is unlikely to be adopted. The roles of the occupational disease lists in determining compensation vary significantly. With the exception of Sweden, all EU countries have a mix between a "closed" system with a list including a certain number of diseases, and an "open" system. In Sweden there is an open system in which each claim for benefits is treated on its own merits, where all illnesses that could possibly arise from workplace exposures are considered.
France has a more pragmatic closed system and lists 112 occupational diseases that require specific symptoms or pathological lesions to be present, from work that is known to cause the disease, and also specifies the time limits for compensation claims. In other EU countries, the interpretation and the use of lists are within systems that contain elements of both. The cost of compensating occupational diseases accounts for the majority of the total costs of compensation in European countries.

Workers' compensation systems in Europe continue to rely on physicians and other experts to determine who receives benefits. Determining the causes of occupational diseases involves a review of epidemiological and other scientific and medical evidence, and often the agreement of expert consultants as to the increased risks resulting from the occupational exposures. Significant differences exist in the established and applied diagnostic and exposure criteria in the EU countries. There are differences concerning the extents to which claimants must show evidence of work exposure leading to disease. In Belgium, Italy, and Luxembourg, there is a presumption of cause. It is sufficient for victims to demonstrate that they are suffering from listed diseases and that they have been exposed to corresponding risks or that they have done jobs specified by the lists. Similarly, in France, the list of occupational diseases is considered to be a presumption of cause. In Austria, Denmark, Finland, Germany, Portugal, Spain, and Switzerland, the lists serve merely as guides to insurance organizations investigating the claims. Insurers will seek to establish whether a disease could have been caused by an agent on the national list while at the same time searching to find whether there are non-occupational factors that could have caused it.

Reform Proposals in Europe

Reforms of workers' compensation systems are being considered in various European countries.
The European Commission's Community Strategy on Health and Safety at Work proposes to significantly reduce the incidence rates of occupational injuries and illnesses by 2012. To achieve this, EU member countries must implement health and safety regulations in national legislation. Enforcement is essential, especially in small and medium-sized enterprises, where member countries must take direct measures to ensure compliance with legislation, such as inspection and the issuing of penalties.

Trade unions direct much of the discussion of reform, alleging that most occupational diseases are still ignored by the compensation systems, with under-reporting, inadequate monitoring, and a resultant lack of compensation. "Under-recognition of occupational diseases is common to all EU countries. Its most immediate consequence is a wholesale transfer of resources to the employer's benefit, with much of the cost burden being shared between victims (loss of pay as a result of reassignment or firing), and general health budgets (social security coverage of diseases, disability, and unemployment, national health system, etc.)".

European countries share the concern that exclusion of people with health problems or disabilities from the labor market contributes to an increasing dependence on health-related benefits. This in turn puts pressure on the larger social protection system. Even though EU member countries have developed many of the most successful workers' compensation programs in the world, the trade unions and other groups propose that substantial reforms are needed. They emphasize that:

• All occupational diseases should be recognized and compensated as reliably as are occupational injuries. Procedural reform is a clear priority throughout Europe to remedy the inefficiency of the mixed system. Changes to the criteria that define occupational diseases indirectly challenge the causal presumption and should be removed from the existing schedules.
Revision of schedules and recognition of new occupational diseases should be enhanced. There must be a shift in the onus of proof in light of epidemiological evidence that some occupations involve major risks of exposures to certain hazards and diseases.

• Prevention should be emphasized for chronic diseases with delayed onsets, and diseases with long latency periods such as occupational cancers. Health surveillance must be more than just medical surveillance. It must embrace surveillance of risks and exposure, the latter being particularly critical for the recognition of occupational cancers and other long-latency diseases.

• Trade union participation should be expanded so that trade unions are informed about companies with unacceptable health and safety records. In Spain, the campaigns of trade unions have led to the establishment of regional safety representatives and labor inspectors, bringing about needed improvements in occupational health and safety.

• Perceived inadequacies in levels of compensation available through social insurance combined with perceptions of injustice over employer immunity from redress under the civil law are leading to some reorientation of national "no fault" compensation systems towards a closer fit with civil law models.

• Criminal lawsuits should be reinstated where they would have salutary effects on occupational health and safety. Italy provides an example of a country where the national legal service is actively involved in identifying the occupational origins of diseases. Law officers specialized in workplace health issues work in conjunction with trade unions and victim support groups.

• The linking of occupational health and safety with public health should be enhanced to focus public health action on reducing social health inequities caused by working conditions. A public health approach to workplace health requires both political and legal changes and an extension of the spheres of preventive activities.
It raises the issue of social control of the conditions of production to be consistent with both human and environmental health. It brings the concept of sustainability into the evaluation of working conditions more than ever before.

• Compensation systems should be integrated into a global health and safety strategy. The rapid rise in occupational injuries and diseases in developing countries cannot be ignored. The export of high-risk activities to countries in Asia, Africa, and Latin America is unacceptable. Global regulation is not nearly sufficiently developed.

Many European reforms are moving in the direction of economic incentives that reward organizations that develop and maintain safe and healthful working environments. "European workers' compensation systems always provide a combination of pure insurance functions and government regulation. The weak point of most of the compensation systems is, however, a lack of simple correlation between preventive activities and financial benefits". There are important differences in the institutional assets, in the compensation of occupational diseases, and in the kinds of incentives used in different countries regardless of the social insurance system. The fundamental difference between countries is whether the workers' compensation system is based on a competitive market between private insurance companies or on a monopoly structure, where employers cannot choose among insurance providers. Of the 27 EU countries, 19 have monopoly systems. Subsidy systems, tax incentives, and insurance-based "experience rating" are theoretically possible in all EU countries. The European Agency for Safety and Health at Work (EU-OSHA) concludes that "In competitive insurance markets, effort-based incentives are more difficult to achieve. A possible solution could be the introduction of long-term contracts or the creation of a common prevention fund, financed equally by all insurers".
Few countries outside Europe can attain such levels of social progress. Nonetheless, the European model is followed to varying degrees throughout the world. In many developing countries, workers' compensation is little more than a paper program where the government works in concert with industry to minimize the provision and costs of benefits. The International Labor Organization (ILO) Convention 121 (Employment Injury Benefits Convention) is intended to ensure that the occupationally injured and diseased workers in member countries receive social security benefits that conform to the ILO's requirements for employment injury benefits and medical care and sickness benefits. To encourage countries to ratify the treaty, its requirements have a fairly low threshold. At present, only 24 of 183 member countries have ratified the treaty, including only about half of the European countries.

Workers' compensation reform is not widely considered outside Europe and North America. A notable exception is New Zealand, which instituted a comprehensive accident insurance system in 1974. The New Zealand no-fault system provides for compensation for all victims of injury by accident, regardless of the cause of the accident, eliminating tort remedies for such injuries. Under this system, emphasis is placed on accident prevention and, when necessary, on the rehabilitation of injured persons. Tort litigation over accidents has been almost entirely eliminated by statute. Public hospitals provide medical treatment, and lump-sum awards may be granted for permanent impairment. The New Zealand system offers timely compensation to injured patients and shows evidence of effective complaint resolution and provider accountability.

Reform Proposals in the United States

The legislative activity leading to the passage of the Occupational Safety and Health Act (OSHAct) raised serious questions about the fairness and adequacy of the state workers' compensation programs.
Congress found the system to be in disarray, with low benefits, inadequate coverage and medical care, poor or no rehabilitation, poor administration, and excessive litigation. While the primary purpose of the OSHAct was to ensure uniformity in the application of safety and health regulations, the Act also mandated the first steps toward nationwide reform of the compensation systems. Yet, after 40 years of experience with the OSHAct, there is virtually no federal influence over the state workers' compensation programs, despite the persistence of considerable variation in administration and benefits.

The Federal Alternative

Many reformers contend that the state workers' compensation system should be discontinued in favor of a national program with uniform coverage of health care and wage-loss benefits. There have been few calls to federalize the state workers' compensation systems in recent years. The public debate does not appear to be necessary. Most of the responsibility for compensating disabled workers already resides in the federal government, not in the state systems. The federal government not only pays for most workers' compensation benefits, it operates its own array of programs that have considerably more generous benefits than are offered by the state programs. Federal funding of workers' compensation is at least four times that of state programs (See Table ). The Social Security system is a major if not the primary source for insurance for workplace disabilities. In quiet pursuit of the German model of European workers' compensation, Congress expands its authority whenever pressed by worker groups that are not well served by the state programs. The federalization of workers' compensation has been slowly unfolding, with remarkably little discussion of the costs.
The Longshore and Harbor Workers' Compensation Act was enacted in 1927, followed by the Black Lung Benefits Act, the Radiation Exposure Compensation Act, the War Hazards Compensation Act, and the Railroad Retirement Act. These and many other compensation programs operate with permanent positions in the federal government. The Energy Employees Occupational Illness Compensation Program Act of 2000, the newest of these programs, provides compensation for employees of the Department of Energy, its predecessor agencies, and its contractors and subcontractors who become ill as a result of the work performed in the production and testing of nuclear weapons. The Department of Labor has paid more than $6 billion in compensation and medical benefits to more than 60,000 claimants in the past nine years in just this one program.

FECA Reform Proposals

The FECA program is not viewed by legislators as being in need of major reform. In that regard, its status is similar to that of European workers' compensation, undergoing revisions but not requiring major reforms. A number of reform proposals have been circulated by FECA administrators and Inspectors General. These experts readily admit that FECA has serious structural problems, that it creates disincentives to return to work, and that the basic rate of FECA compensation often is more than the employee's pre-injury take-home pay. One of the proposed reform measures calls for setting disability compensation at 70% for all claimants, rather than the current varied and higher rates allowed under FECA. Another proposed FECA reform concerns the equity issue inherent in the waiting-period provision. The original purpose of a brief waiting period before benefits were awarded was to discourage claims for minor injuries and illnesses. The most obvious problem in need of correction is that current law gives long-term FECA claimants over retirement age a choice between federal retirement system benefits and FECA benefits.
Most claimants choose FECA benefits because they are more generous. A recovering FECA claimant who goes back to work risks finding that his or her retirement income will be less than if he or she had stayed on FECA benefits. Although no authority exists currently to reduce FECA benefits based on age, two types of changes have been proposed to reduce FECA benefits when employees reach an age when retirement normally occurs. One proposed change would convert FECA benefits to retirement benefits at retirement age.

The Public Health Model

In 2006, I proposed the Public Health Model as a major departure from the European models that have dominated U.S. workers' compensation. The Public Health Model would abolish the entire workers' compensation system, replacing it with a comprehensive disability insurance system for all injuries and illnesses. The objective is to insure everyone equitably and to abolish the government agencies that have failed to do this over the past century. Industry and labor would deal directly with government agencies to determine a national set of benefits for injured or ill workers, with uniform incentives to return to work. Wage replacement for workers ought to be provided for a period of time stipulated by government and consistent with other social programs. The Public Health Model stipulated the following objectives:

• The current federal and state workers' compensation systems should be discontinued in favor of a national program with uniform coverage of health care and loss-of-earnings benefits. Workers in private employment should receive the same benefits as government workers.

• Resumption of tort liability should end exclusive remedy and all other provisions of the various state workers' compensation programs.
• The replacement will be a no-fault compensation system based on disability rather than cause, with an integrated approach to disability compensation such as exists in The Netherlands, where all employees are covered by a compulsory scheme that insures against loss of earnings resulting from long-term disability resulting from any injury or disease.

• There should be a national disability program similar to that in New Zealand to provide compensation for all victims of injury by accident, regardless of the cause of the accident. Disability should be defined and benefits administered without the need for health care professionals.

• Social Security (SSDI) disability benefits should be provided for all permanent injuries and illnesses. This uniform national coverage should provide an income at a level to support a dignified standard of living during disability.

• Health care should be provided by a national health care system independent of industry involvement and insurance industry control. Workers should receive the same medical care under the same conditions as all other citizens.

• Tort liability for negligence should be imposed on those who knowingly cause disability. There should be industry-wide shared liability for disability caused by or connected to industry, and society-wide shared liability for disability whose cause cannot be identified.

Europeans see a considerable advantage in the direct financing of workers' compensation by employers. European employers are better organized and have more societal power than workers' organizations, yet are still perceived to be protective of worker interests. Consequently, Europeans hold that if all health care is financed by a public budget, there will be less political pressure from employers to keep the system efficient and there could be less motivation for prevention. Employers may put less effort in better working conditions in order to avoid costs when workers' compensation becomes a societal cost.
Because of these concerns, the proposed reforms in the United States are quite unlike the established European models. The Public Health Model would impose tort liability for negligence on those who knowingly cause disability. There would be industry-wide shared liability for disability caused by or connected to industry, and society-wide shared liability for disability whose cause cannot be identified. The European view of the United States resuming tort liability is generally circumspect since workers are in a weaker legal and economic position than the employer. The U.S. experience is that workers are already in a weak position with workers' compensation and that tort liability will benefit workers. In Europe, there is political discussion of how to protect workers (whistle blowers) who report obvious bad practices by their employers. Resumption of tort liability in the United States would enhance the protections afforded whistle blowers.

Abolishing the state workers' compensation system and the federal FECA program could result in significant cost savings. The present system incurs high overhead expenses providing benefits through programs burdened with governmental bureaucracy, shared administration by private insurance companies and employers, and litigation. The public health model would treat occupational diseases in the same way all other disease is treated, removing them from the workers' compensation arena where causation must be demonstrated, often leading to litigation.

The public health model requires that we move beyond holding employers responsible for causing illness and injury through the direct costs of medical care and wage replacement. The public health model seeks to advance prevention of occupational injuries and illnesses through consultation provided by unbiased experts in health and safety. In doing so, the model adopts the European experience of physician consultation with industry.
France, Belgium, and Germany employ physicians to conduct inspections of worksites and examinations of employees. These physician consultants provided by government are able to mandate employer-financed occupational health services in many of the larger plants . A complete hazard survey for every workplace in the country is conducted in Germany, followed by health examinations of the workers, and a plan for removal or control of hazards according to the severity of risk . Occupational health physicians working in corporations, as well as the companies that employ them, are protected from malpractice liability by workers' compensation law. The "exclusive-remedy" provision of the law is the quid pro quo under which the employer enjoys immunity from being sued by workers for failing to be responsible for worker health in exchange for accepting financial liability for the workers' injuries. This protection would end under the pubic health model.
Workers' compensation is a European concept, dating back to German Chancellor Otto von Bismarck. By the turn of the 20th century, all European countries had workers' compensation laws. The German law required employees to pay part of the costs and called for highly centralized administration. Its coverage was broad and compulsory, and it provided for nonprofit mutual employers' insurance funds. Most industrialized nations now have national workers' compensation programs based on the German model. The British law embodied a different approach: it was elective, administration was left to the courts, and insurance was carried through private firms. The German system was closely linked to the rest of the social insurance system. It provided for accident prevention, medical treatment, and rehabilitation, whereas the British scheme did none of these things. The British system was troubled from the outset by disputes over which jobs and what industries were to be covered, resulting in the litigation that the system had been intended to replace.
In the United States, two separate and unequal workers' compensation systems, federal and state, operate entirely independently of each other. The federal system, under the Federal Employees' Compensation Act (FECA), was designed and operated following the German model. FECA covers 2.9 million federal employees in more than 70 different agencies, along with a number of other worker groups added by Congress in successive expansions of federal authority. FECA provides benefits without delay and moves disabled workers to other government programs in a non-adversarial system. It provides social insurance that most European countries would recognize as equal to their own. The FECA program operates without competition. The Secretary of Labor has exclusive jurisdiction over the entire program, including the several appeal and review processes. The Department of Labor (DOL) has few constraints on what it charges the federal agencies, passing on its costs plus, in some cases, a fee based on a pro rata share of administrative costs. Most federal agencies include workers' compensation costs in their annual appropriation requests to Congress, which makes the costs difficult to discern. The United States, late in accepting workers' compensation, allowed the individual states to develop separate and unequal programs. The state laws were influenced much more by the English system than the German one. As with the original English Poor Law, prevention of poverty, not prevention of disability and its social management, was the driving concern in the development of the state workers' compensation programs. The medical profession was a major opponent of compulsory social health insurance. The resulting fragmented workers' compensation arrangements now offered to most workers in state plans bear little resemblance to the federal program. The state workers' compensation system has been confrontational with workers throughout its history, with benefits that are far from adequate.
State programs pay for less than one third of the total costs of occupational injuries and illnesses, shifting most of these costs to the workers, their families, private medical insurance, and Medicare and Medicaid . Ten times as many severely disabled occupational disease victims receive Social Security Disability Insurance (SSDI) or early retirement benefits as receive workers' compensation benefits . Although the state programs may appear to approximate the FECA program, there are major deficiencies in coverage and benefits they offer. Occupational injuries and illnesses, if accurately reported, would be among the five leading causes of morbidity and mortality in the United States . These injuries and diseases, along with their fatalities, result in costs that are well over three times the published expenditures, and roughly 3% of the U.S. gross domestic product (GDP) . Most of the costs of occupational disease are not covered by workers' compensation . Only about one in twenty severely disabled occupational disease victims receives workers' compensation benefits. For occupational cancers, it is fewer than one in a hundred.
In Europe, social democracies, mixed systems that balance market forces with government assistance, have flourished since World War II. While the United States devotes only 11 percent of its GDP to redistributing income by way of social benefits, the countries of the European Union contribute more than 26 percent. In Europe, there is a tradition of strong labor political organizations. Given the EU's broad base of support for workers' compensation, reforms are not the urgent necessity there that they are elsewhere. Nonetheless, significant variations exist among EU countries. Social protection represents more than 30% of the GDP in Sweden, Denmark, France, and Germany, yet less than 15% in the Baltic countries. The United Kingdom spends a much smaller proportion of its GDP on occupational injury and disease benefits than other European countries. The new Conservative government is instituting a program of reassessing individuals receiving disability benefits. Three fourths of those evaluated in the early stages of the program have been found fit for work under new evaluation guidelines. The basic criteria of the workers' compensation systems in European countries are similar. There are two distinct approaches. The first is that of the German system, with self-governed insurance associations funded by employers' contributions providing a comprehensive prevention, rehabilitation, and compensation service. In the second approach, the state administers the system for compensating occupational injuries and disease as part of its wider provision for social security and collects from employers the sums necessary to finance it. Many European countries now have mixtures of the state and private insurance systems. While fewer than half of American workers are covered by short-term disability insurance, all workers in EU countries are covered against the risk of wage loss due to temporary sickness through government agencies.
Wage-replacement schemes consist of social insurance covering the loss of earnings due to old age, unemployment, temporary sickness, or permanent disability. In all of the EU countries except The Netherlands, disability social security schemes are separated from compensation for occupational injuries. Coverage typically lasts up to a year, with transition to the longer-term disability insurance programs if needed. In The Netherlands, partially disabled unemployed workers are given the same benefits as totally disabled workers . This offers considerable advantage over the failed attempts in the United States to deal with partial disability. The level of compensation has a profound influence on utilization. In The Netherlands, the rate of disability in the working-age population is close to 9%, compared with an average of 6% in other European countries. Forty percent of persons with disabilities become long-term unemployed . In response, comprehensive reforms have increased employer responsibilities over the past decade, and now provide a more limited benefit package. Employers are provided incentives to recruit disabled workers in order to reintegrate them into the work setting . Emphasis is placed on returning workers with injury and illness to acceptable jobs, and on improving the work environment to prevent recurrence. Germany provides an example of the legally mandated role of the insurance organizations to provide specific initiatives on rehabilitation, prevention, regulation, and regulatory inspection. It is compulsory for employers to adapt the working conditions and/or to find a new work activity in the same company . European countries have adopted lists of occupational diseases that are typically appended to regulatory provisions, thus ensuring the responsibility of the state. This is a concept that has never taken hold in the United States and is unlikely to be adopted. 
The roles of the occupational disease lists in determining compensation vary significantly. With the exception of Sweden, all EU countries have a mix between a "closed" system, with a list of designated diseases, and an "open" system. In Sweden there is an open system in which each claim for benefits is treated on its own merits, and all illnesses that could possibly arise from workplace exposures are considered. France has a more pragmatic closed system: its list of 112 occupational diseases requires that specific symptoms or pathological lesions be present, that the work be of a kind known to cause the disease, and that compensation claims be filed within specified time limits. Other EU countries interpret and use their lists within systems that contain elements of both. The cost of compensating occupational diseases accounts for the majority of the total costs of compensation in European countries. Workers' compensation systems in Europe continue to rely on physicians and other experts to determine who receives benefits. Determining the causes of occupational diseases involves a review of epidemiological and other scientific and medical evidence, and often the agreement of expert consultants as to the increased risks resulting from the occupational exposures. Significant differences exist in the established and applied diagnostic and exposure criteria in the EU countries. There are differences concerning the extents to which claimants must show evidence of work exposure leading to disease. In Belgium, Italy, and Luxembourg, there is a presumption of cause: it is sufficient for victims to demonstrate that they are suffering from listed diseases and that they have been exposed to corresponding risks or that they have done jobs specified by the lists. Similarly, in France, the list of occupational diseases is considered to be a presumption of cause.
In Austria, Denmark, Finland, Germany, Portugal, Spain, and Switzerland, the lists serve merely as guides to insurance organizations investigating the claims. Insurers will seek to establish whether a disease could have been caused by an agent on the national list while at the same time searching to find whether there are non-occupational factors that could have caused it.
Reforms of workers' compensation systems are being considered in various European countries. The European Commission's Community Strategy on Health and Safety at Work proposes to significantly reduce the incidence rates of occupational injuries and illnesses by 2012. To achieve this, EU member countries must implement health and safety regulations in national legislation. Enforcement is essential, especially in small and medium-sized enterprises, where member countries must take direct measures to ensure compliance with legislation, such as inspection and the issuing of penalties. Trade unions direct much of the discussion of reform, alleging that most occupational diseases are still ignored by the compensation systems, with under-reporting, inadequate monitoring, and a resultant lack of compensation. "Under-recognition of occupational diseases is common to all EU countries. Its most immediate consequence is a wholesale transfer of resources to the employer's benefit, with much of the cost burden being shared between victims (loss of pay as a result of reassignment or firing) and general health budgets (social security coverage of diseases, disability, and unemployment, national health system, etc.)". European countries share the concern that exclusion of people with health problems or disabilities from the labor market contributes to an increasing dependence on health-related benefits. This in turn puts pressure on the larger social protection system. Even though EU member countries have developed many of the most successful workers' compensation programs in the world, the trade unions and other groups propose that substantial reforms are needed. They emphasize that:
• All occupational diseases should be recognized and compensated as reliably as are occupational injuries. Procedural reform is a clear priority throughout Europe to remedy the inefficiency of the mixed system. Changes to the criteria that define occupational diseases indirectly challenge the causal presumption and should be removed from the existing schedules. Revision of schedules and recognition of new occupational diseases should be enhanced. There must be a shift in the onus of proof in light of epidemiological evidence that some occupations involve major risks of exposure to certain hazards and diseases.
• Prevention should be emphasized for chronic diseases with delayed onsets and diseases with long latency periods, such as occupational cancers. Health surveillance must be more than just medical surveillance: it must embrace surveillance of risks and exposure, the latter being particularly critical for the recognition of occupational cancers and other long-latency diseases.
• Trade union participation should be expanded so that trade unions are informed about companies with unacceptable health and safety records. In Spain, the campaigns of trade unions have led to the establishment of regional safety representatives and labor inspectors, bringing about needed improvements in occupational health and safety.
• Perceived inadequacies in the levels of compensation available through social insurance, combined with perceptions of injustice over employer immunity from redress under the civil law, are leading to some reorientation of national "no fault" compensation systems towards a closer fit with civil law models.
• Criminal lawsuits should be reinstated where they would have salutary effects on occupational health and safety. Italy provides an example of a country where the national legal service is actively involved in identifying the occupational origins of diseases. Law officers specialized in workplace health issues work in conjunction with trade unions and victim support groups.
• The linking of occupational health and safety with public health should be enhanced to focus public health action on reducing social health inequities caused by working conditions.
A public health approach to workplace health requires both political and legal changes and an extension of the spheres of preventive activities. It raises the issue of social control of the conditions of production to be consistent with both human and environmental health. It brings the concept of sustainability into the evaluation of working conditions more than ever before.
• Compensation systems should be integrated into a global health and safety strategy. The rapid rise in occupational injuries and diseases in developing countries cannot be ignored. The export of high-risk activities to countries in Asia, Africa, and Latin America is unacceptable. Global regulation is not nearly sufficiently developed.
Many European reforms are moving in the direction of economic incentives that reward organizations that develop and maintain safe and healthful working environments. "European workers' compensation systems always provide a combination of pure insurance functions and government regulation. The weak point of most of the compensation systems is, however, a lack of simple correlation between preventive activities and financial benefits". There are important differences in the institutional assets, in the compensation of occupational diseases, and in the kinds of incentives used in different countries, regardless of the social insurance system. The fundamental difference between countries is whether the workers' compensation system is based on a competitive market between private insurance companies or on a monopoly structure, where employers cannot choose among insurance providers. Of the 27 EU countries, 19 have monopoly systems. Subsidy systems, tax incentives, and insurance-based "experience rating" are theoretically possible in all EU countries. The European Agency for Safety and Health at Work (EU-OSHA) concludes that "In competitive insurance markets, effort-based incentives are more difficult to achieve.
A possible solution could be the introduction of long-term contracts or the creation of a common prevention fund, financed equally by all insurers" . Few countries outside Europe can attain such levels of social progress. Nonetheless, the European model is followed to varying degrees throughout the world. In many developing countries, workers' compensation is little more than a paper program where the government works in concert with industry to minimize the provision and costs of benefits . The International Labor Organization (ILO) Convention 121 (Employment Injury Benefits Convention) is intended to ensure that the occupationally injured and diseased workers in member countries receive social security benefits that conform to the ILO's requirements for employment injury benefits and medical care and sickness benefits. To encourage countries to ratify the treaty, its requirements have a fairly low threshold. At present, only 24 of 183 member countries have ratified the treaty, including only about half of the European countries . Workers' compensation reform is not widely considered outside Europe and North America. A notable exception is New Zealand, which instituted a comprehensive accident insurance system in 1974. The New Zealand no-fault system provides for compensation for all victims of injury by accident, regardless of the cause of the accident, eliminating tort remedies for such injuries . Under this system, emphasis is placed on accident prevention and, when necessary, on the rehabilitation of injured persons. Tort litigation over accidents has been almost entirely eliminated by statute. Public hospitals provide medical treatment, and lump-sum awards may be granted for permanent impairment. The New Zealand system offers timely compensation to injured patients and shows evidence of effective complaint resolution and provider accountability .
The legislative activity leading to the passage of the Occupational Safety and Health Act (OSHAct) raised serious questions about the fairness and adequacy of the state workers' compensation programs. Congress found the system to be in disarray, with low benefits, inadequate coverage and medical care, poor or no rehabilitation, poor administration, and excessive litigation. While the primary purpose of the OSHAct was to ensure uniformity in the application of safety and health regulations, the Act also mandated the first steps toward nationwide reform of the compensation systems. Yet, after 40 years of experience with the OSHAct, there is virtually no federal influence over the state workers' compensation programs, despite the persistence of considerable variation in administration and benefits.
Many reformers contend that the state workers' compensation system should be discontinued in favor of a national program with uniform coverage of health care and wage-loss benefits. There have been few calls to federalize the state workers' compensation systems in recent years, and such a public debate may not even be necessary: most of the responsibility for compensating disabled workers already resides in the federal government, not in the state systems. The federal government not only pays for most workers' compensation benefits, it operates its own array of programs that have considerably more generous benefits than are offered by the state programs. Federal funding of workers' compensation is at least four times that of state programs (See Table ). The Social Security system is a major, if not the primary, source of insurance for workplace disabilities. In quiet pursuit of the German model of European workers' compensation, Congress expands its authority whenever pressed by worker groups that are not well served by the state programs. The federalization of workers' compensation has been slowly unfolding, with remarkably little discussion of the costs. The Longshore and Harbor Workers Compensation Act was enacted in 1927, followed by the Black Lung Benefits Act, the Radiation Exposure Compensation Act, the War Hazards Compensation Act, and the Railroad Retirement Act. These and many other compensation programs operate with permanent positions in the federal government. The Energy Employees Occupational Illness Compensation Program Act of 2000, the newest of these programs, provides compensation for employees of the Department of Energy, its predecessor agencies, and its contractors and subcontractors who become ill as a result of the work performed in the production and testing of nuclear weapons.
The Department of Labor has paid more than $6 billion in compensation and medical benefits to more than 60,000 claimants in the past nine years in just this one program .
The FECA program is not viewed by legislators as being in need of major reform. In that regard, its status is similar to that of European workers' compensation, undergoing revisions but not requiring major reforms. A number of reform proposals have been circulated by FECA administrators and Inspectors General . These experts readily admit that FECA has serious structural problems, that it creates disincentives to return to work, and that the basic rate of FECA compensation often is more than the employee's pre-injury take-home pay. One of the proposed reform measures calls for setting disability compensation at 70% for all claimants, rather than the current varied and higher rates allowed under FECA. Another proposed FECA reform concerns the equity issue inherent in the waiting-period provision. The original purpose of a brief waiting period before benefits were awarded was to discourage claims for minor injuries and illnesses. The most obvious problem in need of correction is that current law gives long-term FECA claimants over retirement age a choice between federal retirement system benefits and FECA benefits. Most claimants choose FECA benefits because they are more generous. A recovering FECA claimant who goes back to work risks finding that his or her retirement income will be less than if he or she had stayed on FECA benefits. Although no authority exists currently to reduce FECA benefits based on age, two types of changes have been proposed to reduce FECA benefits when employees reach an age when retirement normally occurs. One proposed change would convert FECA benefits to retirement benefits at retirement age.
In 2006, I proposed the Public Health Model as a major departure from the European models that have dominated U.S. workers' compensation. The Public Health Model would abolish the entire workers' compensation system, replacing it with a comprehensive disability insurance system for all injuries and illnesses. The objective is to insure everyone equitably and to abolish the government agencies that have failed to do this over the past century. Industry and labor would deal directly with government agencies to determine a national set of benefits for injured or ill workers, with uniform incentives to return to work. Wage replacement for workers ought to be provided for a period of time stipulated by government and consistent with other social programs. The Public Health Model stipulated the following objectives:
• The current federal and state workers' compensation systems should be discontinued in favor of a national program with uniform coverage of health care and loss-of-earnings benefits. Workers in private employment should receive the same benefits as government workers.
• Resumption of tort liability should end exclusive remedy and all other provisions of the various state workers' compensation programs.
• The replacement will be a no-fault compensation system based on disability rather than cause, with an integrated approach to disability compensation such as exists in The Netherlands, where all employees are covered by a compulsory scheme that insures against loss of earnings from long-term disability resulting from any injury or disease.
• There should be a national disability program similar to that in New Zealand to provide compensation for all victims of injury by accident, regardless of the cause of the accident. Disability should be defined and benefits administered without the need for health care professionals.
• Social Security (SSDI) disability benefits should be provided for all permanent injuries and illnesses.
This uniform national coverage should provide an income at a level to support a dignified standard of living during disability.
• Health care should be provided by a national health care system independent of industry involvement and insurance industry control. Workers should receive the same medical care under the same conditions as all other citizens.
• Tort liability for negligence should be imposed on those who knowingly cause disability. There should be industry-wide shared liability for disability caused by or connected to industry, and society-wide shared liability for disability whose cause cannot be identified.
Europeans see a considerable advantage in the direct financing of workers' compensation by employers. European employers are better organized and have more societal power than workers' organizations, yet are still perceived to be protective of worker interests. Consequently, Europeans hold that if all health care is financed by a public budget, there will be less political pressure from employers to keep the system efficient and there could be less motivation for prevention. Employers may put less effort into improving working conditions in order to avoid costs when workers' compensation becomes a societal cost. Because of these concerns, the proposed reforms in the United States are quite unlike the established European models. The Public Health Model would impose tort liability for negligence on those who knowingly cause disability. There would be industry-wide shared liability for disability caused by or connected to industry, and society-wide shared liability for disability whose cause cannot be identified. The European view of the United States resuming tort liability is generally circumspect, since workers are in a weaker legal and economic position than the employer. The U.S. experience is that workers are already in a weak position with workers' compensation and that tort liability will benefit workers.
In Europe, there is political discussion of how to protect workers (whistle blowers) who report obvious bad practices by their employers. Resumption of tort liability in the United States would enhance the protections afforded whistle blowers. Abolishing the state workers' compensation system and the federal FECA program could result in significant cost savings. The present system incurs high overhead expenses providing benefits through programs burdened with governmental bureaucracy, shared administration by private insurance companies and employers, and litigation. The public health model would treat occupational diseases in the same way all other disease is treated, removing them from the workers' compensation arena where causation must be demonstrated, often leading to litigation. The public health model requires that we move beyond holding employers responsible for causing illness and injury through the direct costs of medical care and wage replacement. The public health model seeks to advance prevention of occupational injuries and illnesses through consultation provided by unbiased experts in health and safety. In doing so, the model adopts the European experience of physician consultation with industry . France, Belgium, and Germany employ physicians to conduct inspections of worksites and examinations of employees. These physician consultants provided by government are able to mandate employer-financed occupational health services in many of the larger plants . A complete hazard survey for every workplace in the country is conducted in Germany, followed by health examinations of the workers, and a plan for removal or control of hazards according to the severity of risk . Occupational health physicians working in corporations, as well as the companies that employ them, are protected from malpractice liability by workers' compensation law. 
The "exclusive-remedy" provision of the law is the quid pro quo under which the employer enjoys immunity from being sued by workers for failing to be responsible for worker health in exchange for accepting financial liability for the workers' injuries. This protection would end under the pubic health model.
Workers' compensation law places the occupational physician in a critically important role. The physician must determine that an injury or illness is caused by work, diagnose the problem, prescribe care, and assess the extent of impairment and the ability of the worker to resume work. The common assumption is that physicians can adequately assess the extent of disability that results from occupational injury or illness. This is true only for the minor injuries that have virtually no cost impact on the workers' compensation system. When the injury is more severe, the physician's estimate of the extent of disability is far from satisfactory. Deborah Stone has pointed out that "physicians have no particular skill, training, background, or information to perform the task better than many other individuals. The failure of the physician to provide a reliable service to the worker under these circumstances results in a constant need for dispute resolution through the judicial system". Moreover, the physician's success in returning workers expeditiously to work diminishes rapidly with the increasing severity of the injury or illness. In the public health model, physicians would no longer act as gatekeepers for compensation benefits. The public health model limits the physician's role to privately consulting with the patient and offering advice solely to the patient. Instead, health and safety professionals would work primarily in public health agencies, enhancing the physicians' ability to represent the workers, and to approach the work setting not as employees but rather as advocates for health and safety in the workplace. If companies ignored the recommendations, regulatory agencies would intercede with appropriate enforcement. In the event that problems persisted through lack of industry compliance, the companies would be subject to litigation.
Employers, without the protection of exclusive remedy, will be legally liable for their disregard of occupational health and safety. With the public health model, the costs associated with consultations and prevention ought to be far less substantial than those inherent in the current system. In many European countries, occupational medicine specialists intervene at two levels: as labor inspectors (provided by government and/or social security) and as members of the company preventive services (at one company, or in services covering many different companies). The latter are paid by employers. Inspection is carried out by State labor inspectors; risk assessment and health surveillance are carried out by company (or inter-company) physicians. A complete hazard survey for every workplace is compulsory in all the EU countries. The public health model may now be feasible as an addition to the expanded healthcare coverage afforded by the legislation passed in 2010. A historic opportunity has been created for free choice of physicians by all injured and ill workers. Workers would receive the same health care any citizen would receive for similar injuries and illnesses, making the current system of workers' compensation health care redundant. Universal coverage can reduce total spending by eliminating the high administrative costs that are now necessary to determine eligibility for coverage. European systems interpret lists of occupational diseases and compensate their victims appropriately. The United States and a few European countries, such as Finland, use a general clause or system of proof instead of a list. This requires that, for a disease to be recognized as an occupational disease, a causal link to work must be proven. Occupational diseases affect 15-20% of Americans. Conservative estimates are that 6-10% of cancers, and 5-10% of myocardial infarctions, strokes, and transient ischemia are caused by workplace factors.
Occupational neurological, psychological, renal, and many other diseases are increasingly recognized. Occupational diseases should be covered by workers' compensation, but their costs are largely evaded by state agencies and private insurers through amendments to state laws or through cost shifting. The costs of fully compensating a significant portion of heart disease, stroke, and cancer cases alone would be far beyond the current scope of workers' compensation insurance coverage. The eventual cost of an ever-expanding recognition of occupational diseases will necessitate transferring this burden to the mainstream of medical care, where the determination of causation will no longer be necessary for treatment. The determination of disability is increasingly viewed as a lucrative medical business. Occupational physicians have a political agenda to influence insurers to favor their opinions over those of personal physicians. Some states now require proof of the physician's expertise before testimony can be admitted in court cases. Workers' compensation medical care is much more expensive than other medical care. Medical payments increased by 8.8% in 2008, to $29.1 billion, for the first time accounting for over half of all workers' compensation benefits. It is argued that workers' compensation medical care is delivered by physicians who provide expensive medical treatment to accelerate recovery and return to work, and that these physicians often provide information that determines income benefits, including whether an injury is compensable, when a worker is ready to return to work, and assessments of permanent impairment. These explanations of the extraordinary costs are not well supported, and typically come from the professional association that represents the business interests of its members. A recent survey found that a small group of physicians has a disproportionate effect on workers' compensation claims.
These cost-intensive physicians made up 3.8% of physicians treating workers' compensation cases, but accounted for 72% of costs. They treated 16 times more claimants, and their average claim cost was four times higher than that of other physicians ($46,239 vs $11,390). Despite the increased costs, medical care provided through workers' compensation leads to poor medical results after surgery. In the public health model, occupational physicians would not be able to exclude other physicians from providing care to their private patients.

APHA Policy Statement

The American Public Health Association (APHA) reviewed the proposals for reform of workers' compensation, and in 2009 gave its support to many of the elements of the Public Health Model in a Policy Statement. The APHA called for increased research on work-related illness and reporting methods. A national database would lay the groundwork for research into the causes and consequences of occupational illnesses, and lead to improved diagnosis, treatment, prognosis, and ultimately, prevention of occupational diseases. There should be a comprehensive and universal reporting system for all occupational injuries and illnesses. The APHA Policy Statement outlined the following objectives:
• The workers' compensation system should put prevention of injury and illness, and rehabilitation of those unable to return to work after injury and illness, as its foremost goals.
• The current fragmented workers' compensation system should be replaced by a national program with uniform coverage of health care and adequate loss-of-earnings benefits for all occupational injuries and illnesses.
• The system should be a more comprehensive, no-fault compensation system based on disability, not impairment, such as exists in The Netherlands, where all employees are covered by a compulsory, government-administered plan that insures against loss of earnings from long-term disability resulting from any occupational injury or disease.
• The system should include a national standard of coverage for all workers, including all federal and state government workers. Individual state exemptions for seasonal agricultural workers, home care workers, domestic workers, part-time workers, contractors, immigrant workers, employees of small companies, and all other special categories should be removed.
• The system should be integrated in a seamless manner with the Social Security disability program (SSDI); benefits should be provided for all permanent injuries and illnesses.
• Health care for injured workers should be provided by a national health care system independent of industry involvement and insurance industry control; health care providers should be removed from the responsibility of determining eligibility for benefits.
• The system must have mandatory root cause investigation requirements for all occupational injuries and illnesses.
• The system must have money set aside for: training of occupational health and safety professionals; preventive initiatives based on root injury and illness analyses; worker health and safety training; and mandatory reporting by health professionals.
• The system should provide assistance, incentives, and training in job modification and appropriate return to work.
• Where appropriate, tort and criminal liability for negligence should be permitted for those who knowingly or recklessly cause disability.
• There should be a national medical and statistical database on worker injuries, worker illnesses, worker toxic exposures, and resultant diseases.
European workers' compensation systems offer a number of important examples for the redesign of workers' compensation in the United States. These examples are most useful to the FECA program for federal employees. The EU's far more generous support for workers' compensation produces many redesign initiatives that do not easily translate to the more pressing need for reform that exists at the state level in the United States. The Public Health Model proposed in 2006 would abolish the workers' compensation system and, in its place, adopt a national disability insurance system for all injuries and illnesses. Although a marked departure from European workers' compensation, the Public Health Model would embrace the European success with physician consultation with industry and labor.
DOL: US Department of Labor; EU: European Union; FECA: Federal Employees' Compensation Act; SSDI: Social Security Disability Insurance.
The author has no conflict of interests. He has not accepted payment from any organization or institution. The work is not supported by any grant or institution.
JL is the sole author of the manuscript.
Diagnostic Pathways of

Leptospirosis, caused by bacteria of the genus Leptospira, is a zoonotic disease; all species of homeothermic animals, as well as humans, are susceptible to leptospiral infection. The susceptibility of heterothermic animals is questionable, but positive serological reactions have also been found in them. Among domestic mammals, the most susceptible are pigs, cattle, and dogs. The infection pathways are multiple. Of these, the direct route seems to be the most common—humans and animals may become infected through direct contact with the urine, blood, or tissue of infected animals. Indirect transmission is also common, through contact with water, soil, or food that has been contaminated with the urine of infected animals. Leptospira can also penetrate through wounds or erosions of the skin and mucous membranes, sometimes even through intact skin; cases of transmission of the disease through sexual contact have also been reported. Leptospira transmission is facilitated by hot and humid weather, when the bacteria find better survival conditions in the environment and when humans and dogs have increased contact with infected waters [ , , ]. Puppies manifest clinical infection more often because they are more susceptible, whereas subclinical infections are more frequent in adult or older dogs. Diseased animals shed Leptospira for several days through various secretions and excretions. After recovering from the disease, they continue to shed Leptospira in the urine, thus contaminating the environment. Clinically, leptospirosis displays a wide range of symptoms, among which acute renal failure is one of the predominant and most severe. Dogs can be assessed for leptospirosis infection through a variety of methods. Bacterial culture is the standard method for Leptospira detection in dogs with kidney injury.
Obtaining a Leptospira culture is technically difficult, and the administration of antibiotics may affect the growth of the bacteria, which can lead to a false-negative diagnosis. The immunofluorescence assay is useful for the identification of Leptospira, which is commonly detected in urine and blood, as well as in tissue. Because subclinical shedding has been documented in shelter dogs , leptospirosis may be more common than the number of diagnoses would suggest. Serologic examination may be used to detect subclinical cases. A reliable method of identifying Leptospira is immunohistochemistry [ , , , , , , ]. A 2007 study conducted in several states across the USA found that about 17–25% of examined dogs with no clinical signs had anti-Leptospira antibodies against one or more serotypes found in dogs, showing that there had been exposure to Leptospira [ , , , , ]. The increased incidence of clinical leptospirosis in dogs, together with serological data, suggests that subclinical infection is associated with chronic kidney injury [ , , ]. The initial clinical manifestation of leptospirosis in dogs is kidney injury (tubulointerstitial nephritis). The progression of tubulointerstitial nephritis to renal fibrosis has been described in dogs , and it becomes a cause of mortality in older dogs . The identification and subsequent detection of the lipL32 gene hold significant value in the context of epidemiological surveillance to ascertain the prevalence of pathogenic Leptospira spp. . The lipL32 gene is highly conserved among pathogenic Leptospira spp. . It encodes a surface-exposed lipoprotein anchored to the bacterial cell membrane that serves a pivotal function in mediating the interactions between the bacterium and its host. Notably, this gene is exclusively present in pathogenic Leptospira species , and it is frequently employed as the target in polymerase chain reaction (PCR) assays devised to detect Leptospira DNA within clinical specimens .
The abundance of the lipL32 gene in Leptospira bacteria makes it exceptionally useful as a target in PCR-based detection methodologies, thereby markedly augmenting the sensitivity of the diagnostic test . Even minute quantities of Leptospira cells within clinical samples can be detected through the amplification of this gene . The aim of this study is to compare immunohistochemistry (IHC) and molecular biology (qPCR) diagnostic methods in a retrospective analysis of leptospirosis infection in follow-up dog samples. The epidemiological information was corroborated with the results obtained by applying the IHC and qPCR methods in dogs diagnosed with kidney injury.
2.1. Case Selection and Criteria for Inclusion in the Study

The study was conducted on 65 dogs with kidney injury of any type recorded between September 2016 and May 2023, both in the university clinic and in cases from private clinics in Western Romania. The cases were selected in chronological order and consisted of biopsy samples and autopsies performed between 2016 and 2023. The 65 dogs included in the study belonged to several breeds, as shown in the case distribution table below ( ). Retrospectively, as a routine practice, the clinical assessment focused on the patient’s history (including vaccination status) and clinical signs. Laboratory tests that were commonly performed included a complete blood count (looking for leukocytosis, thrombocytopenia, and anemia), serum biochemistry (evaluating liver enzymes such as ALT, AST, and ALP, azotemia indicated by elevated creatinine, hyperbilirubinemia, and electrolyte imbalances), urinalysis (assessing urine specific gravity and sediment), and serological tests (measuring antibody titers for leptospirosis). Following the postmortem examinations, inflammatory processes, degenerative processes, and neoplasms were found and grouped into four categories: category 1—glomerulonephritis as the predominant lesion, category 2—chronic interstitial nephritis, category 3—acute interstitial nephritis, and category 4—other lesions (including neoplasia, amyloidosis, congestion, bleeding, etc.).

2.2. Post Mortem Detection of Leptospirosis by Immunohistochemical Examination

The IHC was performed in several steps. Sample preparation: No IHC examinations were performed on samples from dogs that had received antibiotic therapy. The tissue samples were fixed in formalin and embedded in paraffin. The paraffin block was sectioned in 4–5 μm layers and immersed in a distilled water bath at 40 °C. The obtained sections were laid on glass slides and dried before immunostaining.
Tissue sections were deparaffinized using xylene and rehydrated by successive immersion in ethanol solutions of decreasing concentrations (from 100 to 95, 70, and 50%) for five minutes each. Antigen retrieval: Endogenous peroxidase activity was blocked by incubating the sample in a 3% hydrogen peroxide–methanol solution for ten minutes at 24 °C, followed by washing with phosphate-buffered saline (PBS). Antigen retrieval was then performed with citrate buffer (pH 6.0), according to the manufacturer’s instructions. Blocking: Non-specific binding of antibodies was prevented with 10% bovine serum albumin in PBS. Primary antibody incubation: Tissue sections were incubated with the primary antibody (Leptospira interrogans, dilution ratio 1/50), specific for Leptospira antigens, in a humid chamber at 24 °C for one hour. Secondary antibody incubation (dilution ratio 1/500): After the primary antibody was washed off twice, the sections were incubated for 30 min with the diluted biotinylated secondary antibody, which binds to the primary antibody. Detection: The avidin-linked enzyme was bound to the biotinylated secondary antibody and then incubated with the substrate to generate a precipitate visualized under optical microscopy. Counterstaining and mounting: Tissue sections were counterstained with hematoxylin to highlight cellular structures. The slides were mounted with a cover slip for examination under a microscope and for the interpretation and quantification of antigen expression in the investigated tissue.

2.3. Post Mortem Confirmation of Leptospirosis by PCR

To confirm the clinical evaluation results, the SYBR Green qPCR assay was employed as suggested by Sripattanakul et al., 2022 . It targeted lipL32, a gene found only in pathogenic Leptospira spp., including L. canicola . All samples were analyzed in triplicate by PCR amplification of the lipL32 gene (used for the detection of pathogenic Leptospira strains), along with a no-template control.
Total genomic DNA was isolated from the 65 biological samples, using 50 mg of kidney tissue and the NucleoSpin® DNA Clean-Up XS kit (Macherey-Nagel, Düren, Germany), according to the manufacturer’s protocol. The quality and quantity of the isolated DNA were assessed spectrophotometrically using a NanoDrop 8000 instrument (Thermo Fisher Scientific, Waltham, MA, USA). An amount of 100 ng of DNA was used as a template in the PCR reactions and amplified using the primers described in the literature : Lep F GGCGGCGCGTCTTAAACATG and Lep R TCCCCCCATTGAGCAAGATT, with the GoTaq® qPCR Master Mix (Promega, Madison, WI, USA) on an ABI 7500 Real Time PCR System (Applied Biosystems, Waltham, MA, USA) in a presence/absence assay. The polymerase was activated at 95 °C for 2 min, followed by 45 cycles of denaturation at 95 °C for 15 s and annealing/extension at 60 °C. The melting curve was analyzed from 60 to 95 °C in 0.5 °C increments at 5 s/step. A cycle threshold (Ct) below 38 was considered a positive result for pathogenic Leptospira spp.

2.4. Statistical Analysis

The statistical test used for interval or continuous variables was the Kruskal–Wallis test (non-parametric). Frequencies were analyzed using Pearson’s chi-squared test. These statistical analyses were performed using SPSS Statistics for Windows, Version 17.0 (SPSS Inc., Chicago, IL, USA). Statistical significance was considered at p values of <0.05.
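The present/absent call described in Section 2.3 (triplicate amplification, positive when Ct is below 38) can be sketched in Python. Only the 38-cycle cutoff comes from the protocol above; the sample names and Ct values below are hypothetical.

```python
# Classify qPCR results as positive/negative for pathogenic Leptospira spp.
# A sample is called positive when the mean Ct of its replicates falls
# below 38, as in the protocol above; no amplification counts as negative.
from statistics import mean

CT_CUTOFF = 38.0  # threshold used in the study

def call_sample(ct_triplicate):
    """Return (mean_ct, is_positive) for a list of replicate Ct values.
    Replicates with no amplification should be passed as None."""
    detected = [ct for ct in ct_triplicate if ct is not None]
    if not detected:                 # no amplification in any replicate
        return None, False
    mean_ct = mean(detected)
    return mean_ct, mean_ct < CT_CUTOFF

# Hypothetical triplicates for three samples
samples = {
    "dog_01": [25.9, 26.1, 26.0],   # strong signal -> positive
    "dog_02": [37.4, 37.6, 37.7],   # late but below cutoff -> positive
    "dog_03": [None, None, None],   # no amplification -> negative
}
for name, cts in samples.items():
    mean_ct, positive = call_sample(cts)
    print(name, mean_ct, "positive" if positive else "negative")
```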
3.1. Clinical and Epidemiological Outcome

The average age of the dogs was 7.28 ± 0.40 years. The study found no correlation between the age of the dogs and their reaction to Leptospira infection (t = 0.131, p = 0.896), according to the IHC test. Additionally, the study found no significant association between breed and Leptospira infection (χ² = 2.908, p = 0.714, ) in dogs that tested positive in the IHC exam. Gender was neither correlated with positive reactions to Leptospira (χ² = 0.074, p = 0.786, data from ) nor with the reproductive status of the dogs (χ² = 0.019, p = 0.890, data from ). The classes related to the immunological status of the dogs from appear to be associated with a positive IHC reaction to Leptospira infection (χ² = 18.961, p = 0.000). Leptospirosis and vaccinated vs. unvaccinated status ( ) were significantly correlated (χ² = 19.164, p = 0.000). The risk of leptospirosis in the unvaccinated animals in our study was calculated at OR = 85.500 (95% CI, 6.82–1071.26, at p = 0.000). From the statistics presented in , it appears that unvaccinated dogs are at a higher risk of leptospirosis, according to the IHC exam.

3.2. Paraclinical Exams

Typically, regarding the antibody status, a result was considered positive if the titer was >1:800. Using the International Renal Interest Society (IRIS) staging system to categorize (as detailed in ) the severity of kidney disease based on elevated creatinine levels, the relevant biochemistry exams revealed the following in the 65 dogs: 6 dogs exhibited severe renal azotemia (9.58 ± 0.75 mg/dL), 8 dogs had moderate renal azotemia (3.49 ± 0.29 mg/dL), 7 dogs showed mild renal azotemia (1.70 ± 0.08 mg/dL), and 25 dogs were diagnosed with non-azotemic kidney disease (0.88 ± 0.06 mg/dL). The differences were significant according to the IRIS classification (Kruskal–Wallis test, χ² = 37.317, p = 0.000).
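The vaccination-status risk estimate reported above (OR = 85.500, 95% CI 6.82–1071.26) is a standard 2×2 odds ratio with a Wald-type confidence interval. A minimal sketch follows; the cell counts used are hypothetical for illustration, since the study's raw contingency table is not reproduced here.

```python
# Odds ratio with a Wald-type 95% CI from a 2x2 contingency table:
#                IHC-positive   IHC-negative
# unvaccinated        a              b
# vaccinated          c              d
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Return (OR, lower, upper); requires all four cells > 0."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of ln(OR)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts (not the study's data)
or_, lower, upper = odds_ratio_ci(20, 4, 7, 34)
print(f"OR = {or_:.2f}, 95% CI {lower:.2f}-{upper:.2f}")
```

The very wide interval reported in the study is typical when one cell of the table is small, which inflates the standard error of ln(OR).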
In terms of frequencies, based on immunohistochemistry, azotemia appeared to be associated with the presence of the leptospiral antigen (χ² = 23.846, p = 0.000, ).

3.3. Histopathological Examination

As shown by the histopathological examination, 8/65 dogs (12%, ) had acute interstitial nephritis (inflammatory infiltrate in the kidney interstitium), 14/65 (22%) had chronic interstitial nephritis (chronic inflammation in the renal interstitium), and 37/65 (57%) had glomerulonephritis (inflammation and damage to the glomerulus). Of these, Leptospira was found in 63%, 43%, and 38%, respectively, in the IHC exam, but the statistical values do not correlate with the classes of renal pathology (χ² = 1.833, p = 0.608, ). Other lesions such as neoplasia, amyloidosis, congestion, and bleeding were also identified.

3.4. Immunohistochemical Examination

Overall, the results of the study showed that 42% (27/65) of dogs had renal pathology associated with Leptospira, according to the immunohistochemistry (IHC) method ( and and , , , and ). The positive cases in our study could not be associated with variables such as breed ( ), gender and sterilization procedure ( ), or anatomopathological kidney pathology ( ). However, the disease, as confirmed by the IHC exam, was associated with azotemia levels ( ) and with leptospiral immunological status ( ).

3.5. Diagnostic Confirmation by qPCR Analyses

A total of 65 canine samples were subjected to qPCR analysis to confirm postmortem diagnoses of Leptospira infection initially identified through immunohistochemistry (IHC) examinations. Of these, the 27 samples that were found positive for Leptospira spp. in the IHC also yielded positive results in the qPCR analysis, with cycle threshold values consistently below 36. In addition, two samples that were not confirmed by IHC were considered positive, since their Ct values were below 38 (36.4762 and 37.5219, respectively) ( ).
In detail, 16 of the 29 positive samples had Ct values below 27, indicating a more advanced stage of infection. The remaining 13 positive samples exhibited Ct values ranging from 27 to 37.5219, suggesting either a lower bacterial load in the kidney tissue collected or a less severe infection. Negative controls (NTCs) showed no amplification, confirming the accuracy of the qPCR method. Furthermore, melting curve analysis revealed no non-specific product formation or primer-dimer artefacts, underscoring the specificity and reliability of the DNA-based diagnostic approach. These results corroborate the initial IHC findings and demonstrate the high accuracy and reliability of qPCR in detecting pathogenic Leptospira spp. Our study indicates that qPCR not only identifies the presence of the pathogen more precisely than IHC, but also provides an assessment of infection severity. In the case of the kidney tissue samples, using the IHC method we identified 27 positive cases and 38 negative cases, whereas using the qPCR method we identified 29 positive cases and 36 negative cases. If we assume the qPCR method has a sensitivity of 100%, as cited in the literature , the IHC method produced two false-negative results. Therefore, in our study, the negative predictive value of the IHC method relative to qPCR can be calculated as 94.74% [36/(2 + 36)].
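Treating qPCR as the reference standard, the counts reported above (27 concordant positives, 2 qPCR-positive samples missed by IHC, 36 concordant negatives, and no IHC false positives) yield the following confusion-matrix metrics; note that 36/(2 + 36) is, strictly speaking, the negative predictive value of IHC.

```python
# Diagnostic performance of IHC against qPCR as the reference standard,
# using the counts reported in the study.
TP, FN, FP, TN = 27, 2, 0, 36  # true pos, false neg, false pos, true neg

sensitivity = TP / (TP + FN)   # 27/29
specificity = TN / (TN + FP)   # 36/36
ppv = TP / (TP + FP)           # 27/27
npv = TN / (TN + FN)           # 36/38 -> the reported 94.74% figure

print(f"sensitivity = {sensitivity:.2%}")  # 93.10%
print(f"specificity = {specificity:.2%}")  # 100.00%
print(f"PPV = {ppv:.2%}")                  # 100.00%
print(f"NPV = {npv:.2%}")                  # 94.74%
```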
The average age of the dogs was 7.28 ± 0.40 years. The study found no correlation between the age of the dogs and their reaction to Leptospira infection (t = 0.131, p = 0.896), according to the IHR test. Additionally, the study found no significant association between breed and Leptospira infection (χ 2 = 2.908, p = 0.714, ) in dogs that tested positive in the IHC exam. Gender was neither correlated with positive reactions to Leptospira (χ 2 = 0.074, p = 0.786, data from ) nor with the reproductive status of the dogs (χ 2 = 0.019, p = 0.890, data from ). The classes related to the immunological status of the dogs from appear to be associated with a positive IHC reaction to Leptospira infection (χ 2 = 18.961, p = 0.000). Leptospirosis and the vaccinated vs. unvaccinated status ( ) were significantly correlated (χ 2 = 19.164, p = 0.000). The risk of leptospirosis in the unvaccinated animals in our study was calculated at OR = 85.500 (95% CI, 6.82–1071.26, at p = 0.000). From the statistics presented in , it appears that unvaccinated dogs are at a higher risk of leptospirosis, according to the ICH exam.
Typically, regarding the antibody status, a result was considered positive if the values were >1:800. Using the International Renal Interest Society (IRIS) staging system to categorize (as detailed in ) the severity of kidney disease based on elevated creatinine levels, the relevant biochemistry exams revealed the following in 65 dogs: 6 dogs exhibited severe renal azotemia (9.58 ± 0.75 mg/dL), 8 dogs had moderate renal azotemia (3.49 ± 0.29 mg/dL), 7 dogs showed mild renal azotemia (1.70 ± 0.08 mg/dL), and 25 dogs were diagnosed with non-azotemic kidney disease (0.88 ± 0.06 mg/dL). The differences were significant according to the IRIS classification (Kruskal–Wallis test,χ 2 = 37.317, p = 0.000). In terms of frequencies, based on immunohistochemistry, azotemia appeared to be associated with the presence of the leptospiral antigen (χ 2 = 23.846, p = 0.000, ).
As shown by the histopathological examination, 8/65 of dogs (12%— ) had acute interstitial nephritis (inflammatory infiltrate in the kidney interstitium), 14/56 (22%) had chronic interstitial nephritis (chronic inflammation in the renal interstitium), and 37/65 (57%) had glomerulonephritis (inflammation and damage to the glomerulus). Of these, Leptospira was found in 63%, 43%, and 38% at the IHC exam, but the statistical values do not correlate with the classes of renal pathology (χ 2 = 1.833, p = 0.608, ). Other lesions such as neoplasia, amyloidosis, congestion, bleeding, etc., were also identified.
Overall, the results of the study showed that 42% (27/65) of dogs had renal pathology associated with Leptospira, according to the immunohistochemistry (IHC) method ( and and , , , and ). The positive cases in our study could not be associated with variables such as breed ( ), gender and sterilization procedure ( ), or anatomopathological kidney pathology ( ). However, the disease, as confirmed by the IHC exam, was associated with azotemia levels ( ) and with leptospiral immunological status ( ).
A total of 65 canine samples were subjected to qPCR analysis to confirm postmortem diagnoses of Leptospira infection initially identified through immunohistochemistry (IHC) examinations. Of these, the 27 samples that were found positive for Leptospira spp. in the IHC also yielded positive results in the qPCR analysis, with cycle threshold (Ct) values consistently below 36. In addition, two samples that were not confirmed by IHC were considered positive, since their Ct values were under 38 (36.4762 and 37.5219, respectively) ( ). In detail, 16 of the 29 positive samples had Ct values below 27, indicating a more advanced stage of infection. The remaining 13 positive samples exhibited Ct values ranging from 27 to 37.5219, suggesting either a lower bacterial load in the kidney tissue collected or a less severe infection. Negative controls (NTCs) showed no amplification, confirming the accuracy of the qPCR method. Furthermore, melting curve analysis revealed no non-specific product formation or primer-dimer artefacts, underscoring the specificity and reliability of the DNA-based diagnostic approach. These results corroborate the initial IHC findings and demonstrate the high accuracy and reliability of qPCR in detecting pathogenic Leptospira spp. Our study indicates that qPCR not only identifies the presence of the pathogen more precisely than IHC, but also provides an assessment of infection severity. In the case of kidney tissue samples, using the IHC method, we identified 27 positive cases and 38 negative cases, whereas, by using the qPCR method, we identified 29 positive cases and 36 negative cases. If we assume the qPCR method has a sensitivity of 100%, as cited in the literature , the IHC method produced two false-negative results. Therefore, in our study, the specificity of the IHC method can be calculated as 94.74% [36/(2 + 36)].
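The IHC-versus-qPCR comparison above can be tallied explicitly. Treating qPCR as the reference standard (assumed 100% sensitive, as the text does), the 65 samples break down into 27 concordant positives, 2 IHC false negatives, and 36 concordant negatives; the sketch below also interprets Ct values with the cut-offs used in the text (Ct < 27 advanced infection, 27–38 lower load, ≥ 38 negative). The function name is ours, not the authors'.

```python
# Confusion counts from the text, with qPCR as the reference standard
TP, FN, TN, FP = 27, 2, 36, 0  # IHC vs qPCR on 65 kidney samples

sensitivity = TP / (TP + FN)    # 27/29
agreement_neg = TN / (TN + FN)  # 36/(2+36): the 94.74% figure quoted above

def classify_ct(ct, advanced=27.0, cutoff=38.0):
    """Interpret a qPCR cycle-threshold value with the text's cut-offs."""
    if ct >= cutoff:
        return "negative"
    return "advanced infection" if ct < advanced else "lower bacterial load"

print(f"IHC sensitivity vs qPCR: {sensitivity:.1%}")
print(f"Negative agreement: {agreement_neg:.2%}")
print(classify_ct(36.4762))  # one of the two IHC-negative, qPCR-positive samples
```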
Because all available diagnostic tests have limitations, the application of a combination of serologic assays and organism detection tests is recommended to optimize diagnosis of leptospirosis . In the case of follow-up kidney dog samples, immunohistochemistry and molecular biology (qPCR) diagnosis methods are recommended [ , , ]. Sykes et al. (2023), in the American College of Veterinary Internal Medicine (ACVIM) consensus statement on leptospirosis in dogs, established clinical and laboratory criteria for confirming a case. In accordance with the consensus statement, a confirmed case must meet the clinical criteria and at least one of the following confirmatory laboratory criteria: (i) a fourfold or higher increase in Leptospira agglutination titer between acute- and convalescent-phase serum specimens at a single laboratory, (ii) the detection of pathogenic leptospires in blood using a nucleic acid amplification test (NAAT), or (iii) isolation of Leptospira from a clinical specimen by a Leptospira reference laboratory . The presence of anti-Leptospira antibodies against one or more serotypes shows that there was exposure to Leptospira [ , , , , ]. The results obtained were compared with the PCR test and the serological examination, which showed that in the urine tests, about 8% (5 cases out of 65) were positive, although only about 3% (2 cases out of 65) of the dogs studied displayed clinical signs [ , , ]. A proportion of the dogs (27/65, or 42%) tested positive in the IHC, displaying a reaction to Leptospira infection. The positive immunohistochemical result is explained by the persistence of the leptospiral antigen for a longer time. The use of the immunohistochemical method has been shown to be effective in identifying leptospiral antigens, and it is the only technique that can be applied currently in legal veterinary diagnostic laboratories.
However, serotypes in the renal tissue cannot be determined by the immunohistochemical method, because the antibodies produce cross-reactions between serotypes. Thus, serum antibody titers from infected dogs cannot provide information regarding infecting leptospiral serotypes or vaccinal leptospiral fragments. All leptospiral vaccines contain whole or fragmented inactivated leptospiral organisms. Immunohistochemistry has been shown to have sensitivity and specificity for Leptospira similar to the silver staining of renal tissue. Some authors have suggested that the role of leptospirosis in chronic interstitial nephritis is unclear, because the evolution of histopathological changes in subacute and chronic forms cannot be observed. This uncertainty is explained by the absence of antibody titers and the non-identification of leptospires in histopathological examinations. Even though IHC is a powerful diagnostic tool, its application in detecting leptospirosis in dogs can be limited by several factors. These include antigen availability , antibiotic use , sample quality, antibody specificity, technical demands, and individual variations in immune response . These limitations must be carefully considered to interpret the IHC results accurately. For all the above-mentioned reasons, we suspect that the IHC results may have been influenced by undeclared antibiotic therapy. On the other hand, rapid diagnosis of leptospirosis can be difficult without adequate expertise, and it is often delayed due to the time needed to obtain results. Real-time polymerase chain reaction (PCR) methods detect amplified PCR products in real time, which provides a better diagnosis than culture and serology [ , , ]. Nowadays, there are multiple real-time PCR (qPCR) methods for detecting Leptospira , but not all of them can distinguish pathogenic and non-pathogenic species .
Furthermore, various probe technologies and qPCR instruments are used for these tests [ , , , , , , ], including kidney samples . Chandan (2016) developed a sensitive PCR assay targeting a specific sequence from Leptospira canicola , which detected as few as ten bacteria and was suitable for diagnosing leptospirosis in humans. Flores (2020) presented a protocol for rapidly detecting leptospiral DNA in environmental water using a TaqMan-based qPCR targeting the lipl32 gene, which was specific to pathogenic Leptospira spp. These findings collectively demonstrated the utility of RT-qPCR in detecting Leptospira spp. Compared to the conventional methods, the advantages of the qPCR detection method are that it is fast, it reduces the chances of contamination, it is specific and sensitive, and it has a high throughput [ , , , , ]. The qPCR assay has been found to detect as low as 10² and 10³ bacteria/mL of pure culture, whole blood, plasma, and serum samples targeting the lipL32 gene regions . Our study demonstrates that qPCR is a robust and specific method for postmortem diagnosis of Leptospira spp. infection in dogs, offering higher specificity and reliability compared to traditional IHC methods, which showed 94.74% specificity in our case. In addition, the advantages of qPCR for detecting Leptospira lie in its sensitivity, specificity, speed, quantitative capability, automation, versatility, early detection potential, robustness, and molecular typing capability. These features make qPCR a valuable tool in both clinical and research settings for diagnosing and studying leptospirosis.
Immunohistochemical examination has been shown to be effective in identifying leptospiral antigens, and it is the only technique that can be applied currently. Serotypes in the renal tissue cannot be determined by the immunohistochemical method, because the antibodies produce cross-reactions between serotypes, and results may be influenced by other factors. Consequently, the IHC and real-time PCR (qPCR) methods have the potential to increase the accuracy of Leptospira detection and postmortem diagnosis.
Infection of Brain Pericytes Underlying Neuropathology of COVID-19 Patients | 90b70bcd-a1c5-4b75-b921-18d7e06b8b5c | 8583965 | Anatomy[mh] | The clinical manifestations of coronavirus disease 2019 (COVID-19) infection primarily include respiratory symptoms, ranging from a mild cough to severe bilateral pneumonia . However, SARS-CoV-2 bears an organotropism beyond the respiratory tract , with increasing testimony indicating the brain as an extrapulmonary target of SARS-CoV-2 . The involvement of the central nervous system (CNS) encompasses a broad spectrum of neurological manifestations (including headache, fatigue, anosmia, ageusia, confusion, and loss of consciousness), often representing an ulterior clinical morbidity that significantly contributes to COVID-19-related deaths [ , , ]. The main entry receptor for SARS-CoV-2 is reported to be the angiotensin-converting enzyme 2 (ACE2), which is a component of the renin–angiotensin system . To date, there is still no conclusive evidence concerning the localization of ACE2 in the human CNS , and the mechanism of SARS-CoV-2 infection in the brain remains a conundrum. Here, using highly sensitive multiplexed immunohistochemistry (mIHC) of brain tissue from a series of confirmed COVID-19 patients and corresponding controls, we determined that ACE2 is exclusively expressed by brain pericytes in the subset of patients that also exhibited neurological symptoms. Moreover, spatial immunophenotyping revealed a localized perivascular inflammation in brain tissue from COVID-19 patients, paralleled by an impairment of the functionality of the vascular wall as indicated by loss of integrity of the blood–brain barrier (BBB). Finally, in the cerebrospinal fluid (CSF) of a cohort of COVID-19 patients with neurological involvement, levels of soluble PDGFRβ, a pericyte-specific marker in the brain, were significantly reduced compared with non-COVID-19 individuals, suggestive of SARS-CoV-2-related functional impairment of pericytes. 
Taken together, our findings highlight a previously unappreciated role for brain pericytes in acting as pioneers for SARS-CoV-2 entry into the CNS.
2.1. The ACE2 Receptor Is Expressed by Pericytes in Murine and Human Brains

Expression of ACE2 in the brain has variably been reported in neurons, glial cells including astrocytes, and vascular cells [ , , , , ]. Because of this ambiguity of localization, we started by exploring ACE2 expression in publicly available mRNA and protein datasets from murine and human brains. Mining of the Allen Mouse Brain Atlas of single-cell transcriptomes demonstrated unique enrichment for Ace2 transcript in pericytes ( A). A similar compartmentalization was observed in the Tabula Muris and in a single-cell RNA sequencing (scRNA-seq) compendium of the murine brain vasculature ( ). In agreement with the transcriptional data, localization of the ACE2 protein by the Human Protein Atlas was restricted to the perivascular compartment in a subset of blood vessels in the human cerebral cortex ( ).

2.2. The ACE2 Protein Is Expressed by Perivascular Cells of Neural Tissue from COVID-19 Patients with Neurological Symptoms

Next, we sought to investigate the expression of ACE2 in the brain tissue of COVID-19 patients. To this end, we obtained FFPE samples of multiple brain regions from six patients whose death was confirmed to be a consequence of SARS-CoV-2 infection and from seven control cases ( ). In the frontal cortex, moderate to high ACE2 immunoreactivity revealed a vascular pattern in a subset of blood vessels in 5 of the 13 cases ( B). Reassuringly, other brain regions showed an equivalent distribution of ACE2, indicating that ACE2 was widely expressed in perivascular cells throughout the CNS ( C). Notably, ACE2 reactivity, which was confirmed with two different antibodies in positive control tissues from the kidney ( ), appeared to be a patient-specific feature, since some cases did not show positivity at all, or showed signals with very low frequency ( D and ). To conclusively validate which cell type harbored ACE2 expression, we performed mIHC on human brain tissue to simultaneously visualize ACE2, CD31 + endothelial cells, and PDGFRβ + pericytes. ACE2 expression coincided with that of PDGFRβ, but not with CD31 staining ( E and ). Pericytes investing the vasculature exhibited a nuanced pattern of PDGFRβ and ACE2 immunoreactivity, with some cells bearing positivity solely for PDGFRβ, while other perivascular cells simultaneously expressed both PDGFRβ and ACE2 markers. Remarkably, the three COVID-19 patients that exhibited moderate to high perivascular ACE2 expression in the brain all presented with neurological symptoms, while all ACE2-negative patients remained free from such manifestations ( D). Collectively, our data demonstrate that in the brain, ACE2 is exclusively expressed by pericytes in a manner that signifies the development of neurological symptoms from COVID-19.

2.3. SARS-CoV-2 Is Detectable in the Human Brain of COVID-19 Patients

An increasing body of evidence converges on the inherent difficulty of detecting SARS-CoV-2 in the brain . To build on previous reports on the localization of SARS-CoV-2 in human brain tissue, we additionally analyzed brain samples from noninfected individuals to enable conclusions about the presence of the spike protein or the nucleocapsid protein of SARS-CoV/SARS-CoV-2 in the CNS with a higher certainty. For both viral components, positive areas in brain sections of COVID-19 patients exhibited patterns comparable with those shown in previous studies . Notably, however, we demonstrated an analogous intensity and distribution of the viral proteins when we probed brain tissues from noninfected individuals ( A). In order to unequivocally define our ability to visualize viral particles in human tissues, we gained access to placental tissue from a confirmed case of SARS-CoV-2 vertical transmission to serve as a positive control . We also made use of the J2 antibody specifically designed to detect viral double-stranded (ds)RNA. In the placenta, a 7-plex mIHC panel confirmed the epithelial cytokeratin + syncytiotrophoblasts as the main target for viral infection by virtue of expression of ACE2 and the presence of dsRNA in a well-defined dotted pattern ( B and ), a pattern of distribution which was essentially preserved with antibodies against the Coronaviridae family or SARS-CoV-2-specific antigens ( ). Finally, applying the now-validated protocol for detection of viral dsRNA to brain sections, we identified an analogous dotted pattern in discrete perivascular, non-endothelial, cells in the brain of COVID-19 patients ( C and ). Reassuringly, the perivascular staining pattern was absent from brain samples of noninfected individuals. Together with our observations of ACE2 expression in pericytes, our conclusive localization of viral dsRNA suggests that brain pericytes are indeed uniquely susceptible to viral infection and may serve as CNS entry points for SARS-CoV-2.

2.4. Perivascular Infection by SARS-CoV-2 in the Brain Is Paralleled by Perivascular Inflammation

We hypothesized that infection of pericytes would result in neuroinflammation and therefore implemented a spatial immunophenotyping approach for the concomitant detection of the endothelium (CD34 + ) and five immune cell populations, including T helper and cytotoxic T lymphocytes, regulatory T cells, B cells, and macrophages. Surrounding the brain vasculature in COVID-19 patients, we detected CD4 + and CD8 + T cells, as well as CD68 + macrophages, indicative of perivascular inflammation, rather than widespread neuroinflammation in the brain parenchyma ( D and ). The immune infiltration did not affect all blood vessels, indicating that the inflammation was not the result of systemic mediators, but rather of local instigation.

2.5. Pericyte Infection Leads to Vascular Fibrinogen Leakage in the CNS

Next, we investigated whether impaired pericyte function subsequent to SARS-CoV-2 infection and the perivascular inflammation impinged on the integrity of the vascular wall. We first performed a 7-plex mIHC staining focusing on the permeability of the neurovascular unit. Remarkably, in COVID-19 patients, extravascular fibrinogen was readily detected as a characteristic gradient in subsets of vessels, occasionally also characterized by ACE2 expression and the presence of viral dsRNA ( A and ). Conversely, fibrinogen was fully retained within the blood vessels of noninfected control cases. Moreover, astrocyte priming indicative of local activation of the brain parenchyma was not apparent during COVID-19 infection ( B and ). Together with our identification of SARS-CoV-2 and immune cell infiltrates in the perivascular region, the leakage of fibrinogen from the blood vessels strongly suggests that viral infection of pericytes breaches the tightly organized BBB.

2.6. Shedding of PDGFRβ into the CSF Is Reduced in COVID-19 Patients

Our findings led us to speculate that the homeostatic state of brain pericytes would be disrupted in COVID-19 patients. Therefore, we collected CSF from an additional eight patients with acute COVID-19 that presented with neurological manifestations, as well as noninfected matched controls ( ). Intriguingly, the soluble level of the pericyte marker sPDGFRβ in the CSF of COVID-19 patients was on average significantly lower than that in non-COVID-19 control individuals as measured by ELISA, indicative of a perturbed pericyte homeostasis ( C).
The primary cellular receptor for SARS-CoV-2 entry is ACE2 , but the expression pattern of ACE2 in the CNS has not been conclusively resolved. Notably, the few published studies detailing the expression of ACE2 and/or SARS-CoV-2 protein in the CNS lack reliable and appropriate controls, precluding firm conclusions. Here, by means of highly sensitive mIHC and the use of both positive and negative control tissues, we were able to confirm that ACE2 exhibited an exclusive perivascular expression pattern in the CNS. Similarly, viral particles and their dsRNA were observed in CNS pericytes in COVID-19 patients, independently of the perivascular ACE2 expression status. Whether other coreceptors for SARS-CoV-2, including TMPRSS2, CD147, and neuropilin-1, contribute to CNS tropism remains to be investigated. Based on our observations, we hypothesize that infection and subsequent damage of brain vascular pericytes by SARS-CoV-2 and perivascular inflammation may lead to impairment of the BBB, instigating neurological complications and possibly virus entry into the CNS. In line with our report, two recent studies observed vascular leakage and perivascular immune infiltration in the brain of COVID-19 patients, but without the crucial link to ACE2 expression by, and infection of, pericytes . However, it is still an outstanding question whether SARS-CoV-2 is overtly neurotropic or if the neurological symptoms associated with COVID-19 are secondary to events related to the systemic host response . Although solely based on the comparable abundance of GFAP (a marker for activated astrocytes) in the tissues, our observations do not provide support for the hypothesis of a cytokine storm. However, increased levels of GFAP have been detected in the plasma of COVID-19 patients . Nevertheless, immune activation markers β2-microglobulin and neopterin were previously found to be elevated in the CSF of COVID-19 patients . 
In addition, a recent scRNA-seq study on the brains of eight COVID-19 patients revealed an increase in inflammatory genes. More importantly, the observed inflammation of the BBB did not require an active viral infection, possibly explaining our inability to detect SARS-CoV-2 in all COVID-19 cases . Alternative to a cytokine storm, an enhanced inflammatory response could be triggered by metabolic manipulation of mitochondria that are hijacked by the SARS-CoV-2 infection . Hence, further investigations are warranted to fully clarify whether a systemic inflammatory response is associated with neurological manifestations of COVID-19. Intriguingly, COVID-19 patients with neurological symptoms presented with a reduced concentration of pericyte-derived sPDGFRβ in the CSF. While our mIHC of brain tissue demonstrated a surprisingly variable occurrence of PDGFRβ + perivascular cells, in line with the results from the CSF analysis, the analysis did not support an overall diminished pericyte coverage of the vasculature of COVID-19 patients. A second, and perhaps more likely, explanation for the reduced expression/shedding of PDGFRβ in COVID-19 patients is that SARS-CoV-2 infection of pericytes diverted the protein synthesis machinery to produce viral proteins, leading to loss of endogenous marker expression and consequential functional impairment. An improved understanding of SARS-CoV-2 neurotropism is urgently needed to guide the clinical management of acute neurological symptoms, as well as to define strategies to prevent postinfectious neurological complications. We propose that a possible entry site of SARS-CoV-2 into the CNS goes through ACE2-expressing pericytes. 
Interestingly, although overt endothelial cell infection by SARS-CoV-2 does not appear to occur , a recent investigation determined that radiolabeled spike viral protein could be retained on the abluminal side of endothelial cells where it is associated with the capillary glycocalyx in mice or further sequestered by the endothelium . It is thus tempting to speculate that this represents one plausible way to expose pericytes to the viral infection. Furthermore, the absence of brain pericytes in mice results in a disrupted BBB associated with widespread loss of integrity . Conversely, sealing of the BBB following thrombolysis after ischemic stroke has been achieved in clinical trials by treatment with the tyrosine kinase inhibitor imatinib . Whether similar interventions aiming to support the integrity of the BBB would alleviate neurological symptoms in COVID-19 patients warrants further studies.
4.1. Patients

Excessive brain tissues sampled from six COVID-19 autopsies and seven non-COVID-19 cases were used to create formalin-fixed paraffin-embedded (FFPE) blocks ( ). The use of these samples was approved by the Central Ethical Review Authority in Sweden (2020-02369, 2020-06582, and 2020-01771). Clinical data with details of neurologic symptoms or other signs of brain affection were sought in the referral documents or else in the Regional Medical Records database Melior, which was used also for the diagnostic work-up. CSF from eight patients with neurological manifestations admitted to the Sahlgrenska University Hospital in Gothenburg, Sweden, was included ( ). Infection with SARS-CoV-2 was confirmed via RT-PCR analysis. Age- and sex-matched non-COVID-19 controls were selected, consisting of patients who were examined because of clinical suspicion of neurological disease, but where no neurochemical evidence was found, based on clinical reference intervals. The use of these samples has been approved by the Regional Ethical Committee in Gothenburg.

4.2. Bioinformatics Data Access and Analysis

Expression of Ace2 was investigated in publicly available scRNA-seq SMART-Seq2 libraries on FACS-sorted non-myeloid brain cells of seven mice ( Tabula Muris ) and in a database of murine vasculature . Mouse whole brain and hippocampus SMART-seq data (gene expression aggregated per cluster, calculated as trimmed means) from the Allen Brain Atlas consortium was downloaded on 14 October 2020 . For expression of Pvalb and Sst neurons, the average was calculated of 13 and 40 cell clusters, respectively. Human ACE2 protein expression images were retrieved from the Human Protein Atlas initiative (Version 20.0) .

4.3. Immunohistochemistry (IHC)

Five-micrometer-thick FFPE tissue sections were dewaxed and rehydrated through xylene and water-based ethanol solutions. Heat-induced epitope retrieval was performed with a pressure cooker (2100 Antigen Retriever, BioVendor, Brno, Czech Republic) in citrate or Tris-EDTA buffer (Agilent Dako, Santa Clara, CA, USA). Following endogenous peroxidase quenching (BLOXALL, Vector Laboratories, Burlingame, CA, USA), tissues were incubated with CAS-block (Thermo Fisher Scientific, Waltham, MA, USA) for 1 h at room temperature (RT) and Ultra V block (Thermo Fisher Scientific, Waltham, MA, USA) for 5 min. Primary antibodies ( ) diluted in CAS-block were applied for 30 min, followed by UltraVision ONE HRP polymer (Thermo Fisher Scientific, Waltham, MA, USA) for 30 min, at RT. The ImmPACT DAB substrate (Vector Laboratories, Burlingame, CA, USA) was applied. Tissues were counterstained with hematoxylin, dehydrated, and mounted with Cytoseal 60 (Thermo Fisher Scientific, Waltham, MA, USA). Imaging was performed with an automated BX63 microscope connected to a DP-80 camera (Olympus, Tokyo, Japan).

4.4. Multiplexed IHC (mIHC)

FFPE sections used for IHC were subjected to multiplexed labeling following optimized protocols established in the lab. All materials were from Akoya Biosciences (USA), including the Vectra Polaris scanner for imaging and the PhenoChart/InForm software. Following slide preparation, sections underwent staining cycles ( )—including blocking, primary antibody incubation, HRP tagging, and labeling with OPAL-conjugated tyramide substrate—and a stripping procedure to remove unbound primary antibody/HRP. A counterstain with DAPI preceded the mounting with ProLong Diamond antifade (Thermo Fisher Scientific, Waltham, MA, USA). The composite images were generated by removing inherent autofluorescence signal from an unstained section, as well as by comparing fluorescence intensities to those of a spectral library.

4.5. Soluble PDGFRβ ELISA

sPDGFRβ concentration in the CSF was measured by sandwich ELISA (Thermo Fisher Scientific, Waltham, MA, USA), as previously described .
4.6. Statistical Analysis

The Mann–Whitney U-test was performed using Prism (GraphPad Software, San Diego, CA, USA). The significance level was set at p < 0.05, two-sided.
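Prism handles this test internally; as an illustration of what it computes, here is a minimal, stdlib-only Python sketch of the two-sided Mann–Whitney U test using the normal approximation. The sample values are invented for illustration and are not study data:

```python
import math
from itertools import chain

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test via the normal approximation
    (no tie correction to the variance; Prism may apply exact or
    tie-corrected variants for small samples)."""
    n1, n2 = len(x), len(y)
    pooled = sorted(chain(x, y))
    # Midranks handle ties: equal values share the average of their ranks.
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    r1 = sum(ranks[v] for v in x)      # rank sum of group 1
    u1 = r1 - n1 * (n1 + 1) / 2        # U statistic for group 1
    mu = n1 * n2 / 2                   # mean of U under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return u1, p

covid = [1200, 950, 1400, 1100, 800, 1300]            # hypothetical sPDGFRβ, pg/mL
controls = [400, 520, 610, 480, 700, 450, 530, 390]
u, p = mann_whitney_u(covid, controls)
print(f"U = {u}, p = {p:.4f}")  # p < 0.05 at the stated significance level
```

With complete separation of the two invented groups, U equals n1 × n2 (here 48) and the approximate two-sided p-value falls well below 0.05.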
Comparison of an Artificial Intelligence–Enabled Patient Decision Aid vs Educational Material on Decision Quality, Shared Decision-Making, Patient Experience, and Functional Outcomes in Adults With Knee Osteoarthritis

Osteoarthritis (OA) of the knee has increased in prevalence and now represents a major public health concern and driver of health care spending. Treatments for knee OA range from activity modification, weight loss, physical therapy, and oral analgesics to joint injections and joint replacement surgery. The presence of multiple treatment options highlights the preference-sensitive nature of OA management and an opportunity for shared decision-making (SDM). SDM is a concept that integrates effective communication and clinician-patient relationship building to understand patient preferences, values, and needs, with the transfer of knowledge regarding treatments, risks, benefits, and alternatives prior to making informed decisions. Simultaneously, there is growing interest in incorporating patient-reported outcome measurements (PROMs) in the decision-making process. PROMs quantify physical, emotional, and social aspects of health from the patient's perspective. These tools have revolutionized patient outcomes research and are increasingly used at the point of care for clinical decision support. Baseline PROM scores can estimate postoperative outcomes when measured against scoring thresholds indicating whether patients are more or less likely to experience clinically meaningful improvement after total knee replacement (TKR)—a procedure consistently providing pain relief, functional restoration, and quality of life (QoL) improvement for advanced OA. This function of PROMs can be augmented by artificial intelligence (AI) and machine learning to synthesize complex relationships within large data sets.
Combining the analytical power of machine learning with clinical and patient-generated data can provide personalized estimations of health outcomes and minimize guesswork during decision-making. We sought to evaluate an AI-enabled patient decision aid (Joint Insights, OM1) delivering patient education, an interactive preferences assessment, and personalized outcome reports generated by a machine learning algorithm using a large national data set. The primary objective of this study was to evaluate how an AI-enabled decision aid (intervention group) affected decision quality for patients with knee OA considering TKR compared with the provision of digital patient education and usual care alone (control group). Secondarily, this study quantified differences between intervention and control groups on the patient's perspective of SDM during the clinical encounter, consultation satisfaction, change in functional outcome, consultation duration, TKR rates, and treatment concordance.
Trial Design

We performed a parallel randomized clinical trial with a 1:1 allocation between cohorts at a musculoskeletal integrated practice unit (IPU) in an academic center in the US serving a diverse population. Two orthopedic surgeons work with a coordinated multidisciplinary team—including an advanced practice health professional, physical therapist, behavioral therapy–trained social worker, and nutritionist—in a collocated outpatient facility. Services include structured exercise programs, imaging, joint injections, weight loss counseling, dietary advice, social support, smoking and alcohol cessation, behavioral therapies, pain management, and surgery where appropriate. PROMs are collected prior to or on arrival in clinic and at follow-up time points as a standard of care. The study was reviewed and approved by the University of Texas at Austin, Dell Medical School institutional review board, and verbal informed consent was obtained from participants. This study followed the Consolidated Standards of Reporting Trials (CONSORT) reporting guideline. The trial protocol appears in .

Participants

A total of 129 patients referred with presumptive knee OA and candidacy for primary TKR were recruited between March 2019 and January 2020. Study subjects were identified during the preclinic team meeting (ie, team huddle). Adult patients aged between 45 and 89 years, fluent in English or Spanish, with body mass index (BMI; calculated as weight in kilograms divided by height in meters squared) between 20 and 46, a primary diagnosis of advanced knee OA (radiographic Kellgren-Lawrence [KL] grade 3 or 4, where grade 0 represents no OA and grade 4 severe OA), baseline Knee Injury and Osteoarthritis Outcome Score for Joint Replacement (KOOS JR) between 0 and 85, able to provide informed consent, and medically fit for TKR were included. Age, BMI, and KOOS JR limits were set based on the estimation model.
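The inclusion limits above amount to a simple screening rule. A hypothetical sketch follows; the function and parameter names are ours for illustration and are not part of the trial's actual screening workflow:

```python
def bmi(weight_kg, height_m):
    # BMI = weight in kilograms divided by height in meters squared
    return weight_kg / height_m ** 2

def meets_inclusion(age, weight_kg, height_m, kl_grade, koos_jr):
    """Check the stated inclusion limits: age 45-89 years, BMI 20-46,
    KL grade 3 or 4, and baseline KOOS JR between 0 and 85."""
    return (45 <= age <= 89
            and 20 <= bmi(weight_kg, height_m) <= 46
            and kl_grade in (3, 4)
            and 0 <= koos_jr <= 85)

# Illustrative patients (invented values):
print(meets_inclusion(age=67, weight_kg=88, height_m=1.70, kl_grade=4, koos_jr=52))  # True
print(meets_inclusion(age=42, weight_kg=88, height_m=1.70, kl_grade=4, koos_jr=52))  # False: under 45
```

Language fluency, consent capacity, and medical fitness for TKR would of course be assessed clinically rather than computed.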
Exclusion criteria disqualified patients with knee problems not primarily related to OA (eg, trauma, inflammatory arthropathy), advanced OA affecting 1 or both knees with need for care of a different joint problem, prior OA management by an orthopedic specialist, and/or any prior lower extremity total joint replacement.

Intervention

The intervention was an AI-enabled patient decision aid incorporating 3 modules (ie, education, preferences, and personalized outcomes) within a software platform. The education module incorporated an overview of the natural history of OA, evidence-based nationally recognized treatment recommendations, comparisons of treatment alternatives, risks and benefits, and a knowledge test developed using guidance from the surgeon authors (K.J.B., J.A., P.J.). The preferences module included ratings for desired levels of pain relief, commitment to postoperative recovery, and willingness to accept surgical risk on a continuum of nonoperative to operative care. Patients also rated whether they gained sufficient knowledge, their self-awareness of their preferences, and their confidence with the level of support received during decision-making. The outcomes module included a personalized report incorporating estimated probabilities of benefits, risks (complications), and likelihood of improvement in joint pain, stiffness, and QoL following TKR, alongside a summary of the preferences and education modules. All content was available in English or Spanish, and the decision aid was deployed after a period of familiarization, testing within the clinical setting, and discussions about its fidelity between the clinical team and the company.

Outcomes

The primary outcome was the decision process score of the knee decision quality instrument (K-DQI), questions 3.1 through 3.5 (eFigure in ).
Secondary outcomes included the level of SDM (assessed using the CollaboRATE survey), patient satisfaction with the consultation (numerical rating scale [NRS]), condition-specific symptoms and functional limitations (KOOS JR), duration of consultation in minutes, TKR rates (proportion of patients undergoing surgery), and treatment concordance (K-DQI question 1.6) (eFigure in ). All outcomes were assessed at the end of the clinical visit except KOOS JR and treatment concordance, which were assessed prior to the consultation and again at a follow-up appointment 4 to 6 months from the initial consultation or date of TKR, as applicable. No changes to trial outcomes were made after study commencement.

Procedures

All patients completed baseline PROMs after registration that included the KOOS JR assessment, a Patient Reported Outcome Measurement Instrumentation System (PROMIS) Global-10 questionnaire, the Generalized Anxiety Disorder screener (GAD-2) and full measure (GAD-7) as indicated, and the Patient Health Questionnaire screener (PHQ-2) and full measure (PHQ-9) as indicated. After escort to the clinic room, patients met the research assistant, who provided study information and obtained verbal informed consent. We randomized eligible patients to the intervention group or control group using the Randomization Module in REDCap (Research Electronic Data Capture), which also housed our study data. Demographic characteristics and ethnicity were captured from electronic health records. Ethnicity was classified by patients during clinic registration with preset options; consideration of patient race/ethnicity was included to ensure the study population reflected our patient demographic characteristics and represented minority populations. Stratified block randomization was used to allot equal numbers of patients from each surgeon's clinic to each treatment group.
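The trial used REDCap's Randomization Module; purely as an illustration of the underlying technique, 1:1 stratified block randomization can be sketched as follows (the block size, seed, and surgeon labels are arbitrary choices for the example, not trial parameters):

```python
import random

def block_randomize(n_patients, block_size=4, seed=0):
    """Generate a 1:1 allocation sequence in shuffled blocks so that
    group sizes stay balanced within each stratum as enrollment proceeds."""
    rng = random.Random(seed)  # fixed seed makes the sequence reproducible
    sequence = []
    while len(sequence) < n_patients:
        block = (["intervention"] * (block_size // 2)
                 + ["control"] * (block_size // 2))
        rng.shuffle(block)          # random order within each balanced block
        sequence.extend(block)
    return sequence[:n_patients]

# One independent sequence per stratum (eg, each surgeon's clinic).
allocation = {surgeon: block_randomize(20, seed=i)
              for i, surgeon in enumerate(["surgeon_A", "surgeon_B"])}
for surgeon, seq in allocation.items():
    print(surgeon, seq.count("intervention"), seq.count("control"))  # 10 and 10
```

Because every block contains equal numbers of each arm, the two groups can never drift more than half a block apart within a stratum, which is what "equal numbers of patients from each surgeon's clinic" relies on.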
An independent project manager performed the random allocation sequence setup with guidance from institutional biostatisticians. Group assignment was revealed after consent and enrollment to research assistants, participants, and surgeons ahead of the consultation. Following randomization, all patients completed a baseline study survey capturing demographic characteristics, clinical factors, and process measures. The control group received the education module and usual care, while those randomized to the intervention group received the education and preferences modules before receiving the report from the outcomes module. Patients reviewed the decision aid modules independently prior to consultations following a brief introduction by the researcher, who periodically checked in with the patient and remained available to answer any questions. Patients received the personalized report at the same time as the surgeon. Both parties had a chance to review the report prior to the consultation, in which surgeons walked patients through the metrics as part of the discussion. We recorded consultation time by stopwatch, marking surgeon entry and exit from the patient room for the decision-making discussion. Patients completed a final set of questionnaires (K-DQI, CollaboRATE, and NRS) before leaving the clinic, and KOOS JR and treatment concordance were completed at the 4-to-6-month follow-up appointment.

Statistical Analysis

All primary and secondary outcome measures were examined for distributional properties; data were analyzed from April to May 2020 using Stata statistical software version 16 (StataCorp). Measures from the K-DQI, KOOS JR, and consultation duration were treated as continuous outcomes; NRS and total CollaboRATE scores as ordinal outcomes; and treatment concordance and TKR surgery as binary measures.
In testing for intervention effectiveness, assumptions of unequal variance across both groups were checked for continuous measures and, if violated, Satterthwaite adjustments were reported for independent sample t tests (ie, K-DQI, consultation time). We fitted linear mixed effects models with random intercepts at the subject level to test the interaction of group assignment with time for KOOS JR. We conducted Mann-Whitney U tests to assess differences between groups for ordinal measures (ie, CollaboRATE, NRS) and performed Fisher exact tests to evaluate differing surgical rates and treatment concordance. To control inflation of the type I error rate, we set α = .05 for the primary outcome measures in 2-tailed tests and did the same for secondary outcome measures. The Hochberg procedure was applied to correct individual test α levels in both groups of outcomes to maintain a familywise type I error rate of 0.05 for each. Power analysis indicated that 130 patients would yield 99% power to detect a comprehensive set of minimum meaningful group differences in K-DQI (15%) and the CollaboRATE total scale (2 points), with 63% and 90% power to detect a 7-point or 9-point pre- to postintervention difference in KOOS JR scores, respectively, with α = .017 under a Bonferroni correction (0.05/3). These 3 measures were considered the measures most directly clinically relevant to SDM, decision aids, and patient outcomes in our study.
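The Hochberg correction referred to above is a step-up procedure over the family of p-values. A sketch, under the assumption that the standard Hochberg (1988) step-up rule is meant, with invented p-values for illustration:

```python
def hochberg(pvalues, alpha=0.05):
    """Hochberg's step-up procedure: returns a reject flag per hypothesis
    while controlling the familywise error rate at alpha."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])  # indices by ascending p
    reject = [False] * m
    # Step up from the largest p: find the largest k with p(k) <= alpha/(m-k+1),
    # then reject that hypothesis and every one with a smaller p-value.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        if pvalues[i] <= alpha / (m - rank + 1):
            for j in order[:rank]:
                reject[j] = True
            break
    return reject

# Hypothetical p-values for a family of secondary outcomes:
ps = [0.001, 0.010, 0.020, 0.300]
print(hochberg(ps))  # [True, True, True, False]
```

Hochberg is uniformly at least as powerful as the Bonferroni split (0.05/3) used in the power analysis, since each p-value is compared against a threshold no smaller than alpha/m.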
Sixty-nine intervention group patients (46 [67%] women) and 60 control group patients (37 [62%] women) were included in the final analysis. provides descriptive statistics and counts for demographic and clinical characteristics.

Participant Flow

During the study period (March 2019 to July 2020, including follow-up), 705 new patients with joint pain presented to the IPU. A total of 560 patients (79.4%) were excluded on initial screening because of exclusion criteria (543 patients [77.0%]) or declining participation (17 patients [2.4%]). The remaining 145 (20.6%) were randomized into the control group (69 patients) or intervention group (76 patients). In the control group, 9 patients were excluded because of failure to complete postconsultation surveys or follow-up KOOS JR assessment, shifting care to hip or low back pain treatment, or study withdrawal. In the intervention group, 7 patients were excluded after failing to complete postconsultation surveys, transfer of care to another service, or not completing follow-up KOOS JR assessment. There were no harms or unintended effects in either group.

Outcomes and Estimation

provides descriptive statistics and counts for primary and secondary outcome measures. Patients in the intervention group showed better decisional quality (K-DQI mean difference, 20.0%; SE, 3.0; 95% CI, 14.2%-26.1%; P < .0001). Ordinal ratings for level of SDM and satisfaction were highly skewed and consistent with improved outcomes in the intervention group using nonparametric tests. More control patients had scores lower than the CollaboRATE median (also the maximum on the measure) than in the intervention group (28 of 60 [47%] vs 8 of 69 [12%]; P < .001). Similarly, 19 of 58 patients (33%) had scores lower than the median value of 10 (also the maximum on the measure) for consultation satisfaction in the control group, while 9 of 65 (14%) had scores lower than the median in the intervention group (P = .01).
Greater improvement in functional outcomes (KOOS JR) was shown from baseline to the 4-to-6-month follow-up (mean [SE], 4.9 [2.1] points higher in the intervention group than the control group; 95% CI, 0.8-9.0 points; P = .02). The intervention group did not experience longer consultation times (mean difference, 2.23 minutes; SE, 2.18). Finally, differential rates of TKR and differential treatment concordance did not reach statistical significance. To assess whether greater TKR rates in the intervention group relative to the control group accounted for the greater improvement in KOOS JR during follow-up (although this difference was already shown not to be statistically significant), we adjusted for the TKR effect in the linear mixed effects model. The relative improvement in KOOS JR remained statistically significant for those in the intervention group (mean [SE], 6.42 [2.31] points higher in the intervention group than the control group; 95% CI, 1.8-10.9 points; P = .02).
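The reported interval is consistent with the usual normal-approximation arithmetic (95% CI ≈ estimate ± 1.96 × SE). A quick check against the 4.9-point KOOS JR difference (SE 2.1):

```python
import math

def wald_ci(estimate, se, z=1.96):
    """95% confidence interval and two-sided p-value from a point estimate
    and its standard error (Wald/normal approximation; the paper's
    mixed-model inference may differ slightly)."""
    lo, hi = estimate - z * se, estimate + z * se
    p = math.erfc(abs(estimate / se) / math.sqrt(2))
    return round(lo, 1), round(hi, 1), round(p, 2)

print(wald_ci(4.9, 2.1))  # (0.8, 9.0, 0.02) — matches the reported 95% CI and P = .02
```

For the TKR-adjusted estimate (6.42, SE 2.31), the model-based P value reported by the authors need not match this simple approximation exactly, since mixed-model tests use different degrees of freedom.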
TKR—among the most common surgical procedures in the US—is a high-stakes encounter for advanced knee OA demanding careful selection. The promise of PROMs to improve SDM through enhanced patient engagement has been recognized. We developed an AI-enabled decision aid incorporating PROMs, patient education, preference assessment, and personalized estimations of clinical outcomes that improved decision quality, SDM, patient satisfaction, and functional outcomes compared with education and usual care. Our study joins others that have demonstrated improvements in decision quality, levels of SDM, and greater patient satisfaction in care for knee OA when using decision aids. Our decision aid produced positive outcomes for individuals from a range of backgrounds, including those experiencing unemployment and those with limited resources. Thus, populations that have been shown to receive less support in making informed care decisions may also benefit from this tool. Notably, the relatively low number of patients declining participation indicates that diverse patient populations, including the underserved, also want to participate in their health decisions. Furthermore, both groups in this study had similar characteristics, although commercial insurance was slightly more common in the intervention group despite randomization. For all individuals with advanced knee OA, this decision aid may better address patient expectations around symptom relief, improvement in physical function, and psychosocial wellness, as well as provide clarity around fears, attitudes, and the risks and benefits of surgery. Expectation management, patient engagement, and patient-perceived control over decisions synergistically yield optimal outcomes and experiences for those with OA. Decision aids addressing patient preferences may equip teams of health professionals to align with these expectations.
Further, the ability of patients to appreciate multiple treatment options and the dynamic nature of their condition is supported by visual elements of our decision aid, including scales for preferences and quantified variability in outcomes based on modifiable personal and clinical factors. Patients using the tool may feel more involved, informed, and in control of the decision-making process; this warrants further study. Notably, few decision aids have demonstrated a positive impact on functional outcomes in knee OA. The findings in our study could be explained by greater patient engagement in those experiencing the full decision aid, which promoted improvements in their ability to perform physical activities. Improvement in PROMs is paramount as payers and policy makers aim to shift toward using patient-centered metrics in value-based payment reforms. Notably, mental health affects PROMs, with preoperative psychological distress affecting pain, function, and QoL following TKR. PROMIS Global mental health scores were incorporated in the algorithm, and raw scores were made available for enriching the surgeon-patient discussion. The combination of PROMs with AI analytics for decision support has also been observed in a 2020 study that demonstrated the feasibility of a web-based tool providing an estimated outcomes report using previsit PROMs plus clinical risk factors to facilitate SDM for patients with hip and knee OA. While large-scale implementation of this decision aid was achieved, inconsistencies remained around whether the report was reviewed during the decision-making consultation—in contrast to our trial, in which review by both patient and surgeon was mandatory in the intervention group. In another study, machine learning was shown to estimate postoperative PROMs using preoperative visual analogue scores for pain, Q scores, and clinical factors.
Other studies have shown that surgeons experience more efficient use of their time with shorter consultations using decision aids. Our study found no significant change in consultation time in either direction despite the inclusion of a joint review of the personalized outcomes report during the consultation. In relation to TKR use, some authors suggest that informed patients opt for more conservative treatments. Indeed, reduced surgical rates have been demonstrated. However, we advise caution in relying on these tools to steer patients toward less invasive treatment options; our previous work exhibited no differential impact on TKR use between decision aid users and nonusers. Interestingly, while outside our target population, those with less severe OA may also benefit from experiencing similar analytics, given that in these cases decision aids are likely to reveal (and underline) less favorable outcomes from undergoing surgery. Advanced decision support may lead patients toward evidence-based nonoperative strategies and surgeons toward safe and judicious use of TKR. This may be a powerful asset as volume and expenditure for TKR increase. More informed patients with more realistic preoperative expectations are less likely to be dissatisfied with the results of their operations, and therefore may use fewer resources (eg, extended physical therapy or pain management) postoperatively. Notably, the lack of significant impact on treatment concordance between our study groups may be explained by the standard of care in the IPU, where set treatment plans during initial consultations are usually followed through (ie, treatment performed matches treatment selected) irrespective of decision aids more focused on the decision-making process itself than the outcome of the decision. Lastly, the decision aid in this study incorporates complex analytics that require careful and clear explanations to patients.
While no formal training was provided to surgeons, future implementation of such tools should incorporate coaching for clinicians to put the data generated in human terms and effectively communicate insights as treatment options. Decision aids have incorporated communication aids to enhance the health care professional–patient interaction, and this tactic could be applied to our tool. Widespread uptake of such innovations will ultimately depend on surgeons attesting to data insights supporting their deductive reasoning and judgment during SDM, data flow automation (to minimize user burden), robust longitudinal PROM collection, coaching on SDM methods, and communication of outputs alongside legal and ethical considerations. Addressing these factors may accelerate the successful integration of AI-enabled decision aids using PROMs and drive uptake of these tools to unlock advanced insights as the machine learns from an accumulating set of clinical- and patient-focused data points to provide reliable, real-time estimations of outcomes.

Limitations

There are several limitations to this study. First, this work was performed in a specialized setting at a single institution delivering a comprehensive range of treatments. While this potentially limits generalizability, the study does demonstrate the feasibility of using this type of decision aid in any setting where PROMs are collected longitudinally. Future evaluations should account for nonoperative strategies delivered and assess outcomes in different settings, including traditional fee-for-service care. Second, because surgeons were not masked to the intervention, there is potential for contamination (ie, differential surgeon behavior expressed toward each group, such as motivational bias manifest in enhanced interactions with those in the intervention group). Such biases were challenging to accommodate in our study design.
Third, we did not assess the effect of the decision aid on patient knowledge, because matching educational content was provided to both groups; nor did we assess patient activation (defined as the propensity to engage in adaptive health behaviors), choice awareness, aspects of deliberation (eg, decision conflict and decision regret), expectation management, or surgeon perceptions (eg, efficiency of the consultation). While research assistants were available to answer any questions patients had while they read the education material and used the decision aid, we did not formally assess health literacy, and some patients may not have been able to read or understand all content in the decision aid. Future iterations of the decision aid may also include audio or video options instead of requiring patients to read the material. Further work should explore these factors while minimizing questionnaire burden and accounting for health literacy, language barriers, and sociodemographic status. Fourth, the typical course of a formal in-clinic OA diagnosis poses a general limitation by restricting the timeframe over which the tool may be applied.
The findings of this study suggest that multifaceted decision aids integrating patient education, preference assessment, and AI-enabled analytics built with PROM data can provide a personalized, data-driven approach to SDM for patients with advanced knee OA considering TKR. Benefits to patients observed in this single-center study warrant further investigation across multiple sites routinely collecting PROMs, with careful consideration of institutional and health care professionals’ experiences of incorporating AI in practice. The patient-centered, data-driven approach to SDM in this study may mark a step-change in the application of patient decision aids in orthopedic practice.
Recent Advances on Peptide-Based Biosensors and Electronic Noses for Foodborne Pathogen Detection

A small group of less than ten microbes is responsible for causing humans millions of diseases around the globe every year. We ingest these foodborne pathogens when consuming contaminated water and food of all kinds, including seafood, poultry, dairy, fruits and vegetables. For most healthy adults, foodborne illnesses are not life threatening. However, complications may arise and result in serious conditions, such as septicemia, spontaneous abortion, stillbirth and death. The World Health Organization (WHO) estimates that foodborne illnesses affect 600 million people and cause almost half a million deaths around the world every year, with children under 5 years old accounting for around 40% of them. The most vulnerable populations are young children, pregnant women, immunocompromised patients and the elderly. Severe forms of these diseases may also occur due to the antibiotic resistance of pathogenic microorganisms, a problem mainly caused by the overexploitation of antibiotics in the medical, agriculture and food industries. Foodborne pathogen contamination pathways include the contact of foodstuffs with water, sewage, air and soil during harvesting, processing and packaging. Bacteria, viruses, molds, worms, parasites and prions can be foodborne pathogens. Bacteria, however, cause the highest number of foodborne illnesses by far. Although outbreaks vary regionally and affect countries of all incomes, least developed and developing countries are the most vulnerable. Africa, the Americas and the Eastern Mediterranean Region suffer the highest number of infections due to foodborne diseases per population, mostly due to Campylobacter, Salmonella, Taenia solium and norovirus.
As for developed countries, in 2020, the European Food Safety Authority (EFSA) reported 3086 foodborne outbreaks, mostly campylobacteriosis, salmonellosis and norovirus infections. In Europe, annual monitoring is compulsory for eight zoonotic agents, of which Salmonella, Campylobacter, Listeria and Shiga toxin-producing Escherichia coli lead as the top pathogens. Similarly, the U.S. identified Salmonella, Toxoplasma, Listeria, norovirus and Campylobacter as the top five foodborne pathogens in 2018. It is quite surprising that such a small set of pathogenic microorganisms could be responsible for millions of diseases worldwide. A factor that contributes to this annual recurrence is the inadequate reporting of outbreaks and their causes. For this reason, the WHO has emphasized the importance of identifying the most common foodborne pathogens by region, so as to generate targeted actions by regulatory bodies in the food industry. summarizes the classification and characteristics of the world's top foodborne pathogens, current detection methodologies and regulatory limits in foodstuffs for the European Union. Effective monitoring systems allow for their earlier detection, which prevents the loss of human life and lowers these diseases' economic burdens, including costs of medical care, lost productivity and premature death related to foodborne illnesses. The U.S. alone estimated a 15.5 billion USD economic burden for the year 2018, highlighting that preventing foodborne diseases has become more economically valuable relative to other goods and services. However, there continues to be an important gap between industrial needs, regulatory policies and existing detection technology. This disparity can be exemplified by the European Union's mandatory screening of broiler carcasses for Campylobacter spp. since 2017, which has yet to be implemented, as a methodology that can fulfill detection demands does not yet exist.
Another example is norovirus detection in the U.S., for which only one test, the RIDASCREEN Norovirus enzyme-linked immunosorbent assay (ELISA), is currently approved for diagnosis, but its use is authorized exclusively for outbreak settings due to its lack of sensitivity. Thus, strategies for quality control improvement in foodstuffs must not be limited to the establishment of more strict measures and rigorous monitoring but should also take into account the development of detection systems that can reach the limits of detection needed in the industry through high sensitivity, cost-effectiveness and the feasibility of implementation. The gold standard for foodborne pathogen detection is based on culture isolation in selective media coupled with serotyping, immunoassays or molecular biology methodologies for the identification of the specific species or strains. Usually, the preliminary results are based on culturing on selective media, which is composed of the necessary nutrients for bacterial growth, as well as additional selective agents with the purpose of isolating a particular species or genus. This process takes 48 to 72 h, and in some cases, may require a pre-enrichment step of up to 48 h. Some bacteria, such as Salmonella, require serotyping, in which bacterial isolates are presented to antisera to identify characteristic antigens of different Salmonella serovars. However, this requires more than 150 specific antisera and highly trained microbiologists to interpret the results. On the contrary, immunoassays take advantage of antibody–antigen specific interactions to measure the concentration of the pathogen in the sample. Although highly sensitive, these are not able to discriminate between live and dead bacteria, may be prone to false positives and negatives and are susceptible to cross-reactivity.
Molecular biology techniques, on the other hand, focus on the recognition and exponential amplification of short nucleic acid fragments specific to a target. One of the most widely used techniques is polymerase chain reaction (PCR), which is compatible with multiplex detection by the use of additional specific primers. Although highly sensitive and specific, this technique is susceptible to inhibitors and needs specialized equipment. Other DNA amplification strategies performed at a constant temperature have been developed to circumvent the use of a thermocycler, such as loop-mediated isothermal amplification (LAMP). However, this technique is extremely sensitive and thus susceptible to contamination that could lead to false-positive results. Although most classical methodologies are accurate and reliable, they can be expensive and require specialized equipment and personnel. Furthermore, the current monitoring processes are lengthy, requiring up to one week for species confirmation. Thus, their implementation time frame is not compatible with the preventive approach that legislative regulation often aims for. In an effort to address these shortcomings, recent years have seen a clear peak in the development of systems for the detection, discrimination and identification of pathogenic microorganisms in a rapid manner and in accordance with regulations. shows articles published for "foodborne pathogen detection" of the most used sensing methodologies, along with future trends according to their publication rate in the last twenty years. Many works have focused on the improvement of already implemented approaches, such as PCR and ELISA, while emerging technologies such as LAMP have only recently gained interest. However, the fastest-growing research field is biosensors.
Furthermore, the rate of publication in each field over the last 20 years was projected onto the next four years, and, once again, it seems that the biosensor field's exponential growth will dominate pathogen detection research. The main objectives in the biosensing field are the development of highly sensitive, low-cost, rapid, portable devices that are compatible with on-site testing and have the same or better performance than the currently implemented techniques. Indeed, the WHO has published international guidelines for new diagnostic tools known as REASSURED (Real-time connectivity; Ease of specimen collection; Affordability; Sensitivity; Specificity; User-friendliness; Rapid & robust operation; Equipment-free; and Deliverability). As for the targeted microorganism, shows published articles in biosensors for foodborne pathogen detection according to the targeted microorganism from 2002 to 2022, as well as the total percentage of the transduction technique employed. Various types of biosensors have been developed by focusing on the detection of Escherichia coli and Salmonella, as they represent a heavy burden to the clinical and food safety domains. Nevertheless, disproportionately little attention has been paid to the detection of other pathogens, such as norovirus, Campylobacter and Listeria, which have been responsible for an even larger number of illnesses in recent years and, in some cases, are more likely to be deadly. However, shows that the food safety field continues to expand as research steers towards targeting a wider range of foodborne pathogens, such as Staphylococcus, Pseudomonas aeruginosa and Bacillus, included in the "other" category. Among different types of transduction systems used, electrochemical (impedimetric, amperometric and potentiometric) biosensors are the most widely employed, making up almost 45% of the biosensors found in the literature search.
Next, optical biosensors, including fluorescence, colorimetric and surface plasmon resonance (SPR)-based platforms, make up about 41%, and the remaining 14% are mass-based biosensors such as quartz crystal microbalance (QCM), surface acoustic wave (SAW) and nanomechanical systems. Different bioreceptors were used to construct these biosensors, including antibodies, nucleic acids, aptamers, peptides and bacteriophages. The choice of bioreceptor is paramount to achieve reliable detection with high sensitivity and specificity. Antibodies were the logical first choice for the development of sensitive detection instruments, having extremely high specificity and affinity for their target. However, their production is laborious, time-consuming and expensive. Additionally, they are more sensitive to unfavorable environmental conditions and can lose stability and specificity for their target under certain complex conditions. Nucleic acids, such as DNA and RNA, have been explored due to their ability to recognize microorganisms based on a specific genetic sequence. As a notable example, real-time PCR (RT-PCR) has quantitative and qualitative characteristics and has been previously used to evaluate various foodborne pathogens. The main drawbacks of nucleic acid-based methodologies are the use of specialized equipment or personnel, high cost, the incompatibility of most of these techniques with on-site detection, their inability to distinguish between live and dead bacteria and their dependence on the DNA polymerase enzyme. Indeed, even the presence of low amounts of ions and molecules from food matrices in extracted nucleic acids may inhibit DNA polymerase and prevent amplification. Aptamers are short, single-stranded nucleic acid sequences that have high binding affinities to different targets and are able to adopt specific three-dimensional sequence-dependent conformations.
They are identified and selected through the systematic evolution of ligands by exponential enrichment (SELEX) methodology according to their ability to bind to a specific target. After undergoing three-dimensional folding, a binding site is created. Aptamer–target interactions depend on the complementarity of their shapes, rather than the genetic sequence. Nevertheless, this binding event is capable of reaching levels of specificity comparable to those of antibodies. Some of their advantages include their stability, easy synthesis, high yield of production and the possibility to form multiple combinations of nucleic base "building blocks", which allows for the creation of multiple candidates that can be screened against a target. Peptides have gained interest in the biosensing field thanks to their unique features, such as good biocompatibility, high stability, ease of synthesis and sequence versatility. Indeed, compared to antibodies, peptides are more resistant to harsh conditions, such as high temperatures or wide pH ranges, required for on-field applications. Today, there are various biological and chemical techniques for the rapid screening of peptide libraries, and their synthesis is simpler and less costly compared to other biomolecules used in biosensors, such as antibodies or nucleic acids. Furthermore, natural and synthetic peptides may contain D-amino acids, which are enantiomers of L-amino acids and were long considered non-natural amino acids. Interestingly, almost all bacteria contain D-amino acids, such as D-alanine and D-glutamate, in their cell envelopes. Peptides carrying D-amino acids bind efficiently to bacterial cells through the incorporation of their D-amino acids into the bacterial cell wall, as demonstrated for Bacillus subtilis. Besides the aforementioned biomolecules, bacteriophages (phages) have also been used as bioreceptors.
Phages are viruses that specifically recognize their bacterial hosts in order to infect them and replicate. They may be lytic or nonlytic, depending on whether they lyse the bacterial membrane after replication or not, and may be used for bacterial quantification assays, done by measuring the adenosine triphosphate (ATP) concentration through bioluminescence or other bacterial cytoplasmic markers that are quantifiable after membrane disruption. Furthermore, phages can be easily implemented in electrochemical sensors, as the disruption of the membrane upon bacterial binding causes a drop in conductivity, an easily measurable event. The production of phage clones with identical genetic sequences is easy and inexpensive, as host infection results in their replication into thousands of copies. Additionally, phage probes can differentiate between live and dead bacteria, withstand harsh conditions, such as wide ranges of pH and temperature, and may also be used as signal amplifiers. Every step of the phage infection cycle has been exploited for detection techniques, but only a few of these have been developed into commercial products, as they have yet to prove a significant advantage in rapidity, sensitivity or specificity over existing techniques. Although the use of phage clones ensures better repeatability due to the robustness of the particle itself, the effective immobilization of whole bacteriophages onto substrates is a crucial step that might prove difficult, as there are multiple possibilities with differing optimal conditions, depending on the orientation, the surface and the type of immobilization chosen. So far, in the literature, the use of antibodies and aptamers in the foodborne pathogen detection field has been extensively reviewed, in contrast to peptide-based biosensors.
Therefore, in this review, we will focus on the sensitive and selective biosensors developed using peptides as bioreceptors for the detection of the most prevalent foodborne pathogens. Moreover, we will present an overview of other emerging peptide-based sensing techniques, such as electronic noses (eN), for foodborne pathogen detection.
Peptides are chains of covalently linked amino acids. There are 20 natural L-amino acids, all consisting of the same framework and differing side functional groups that confer on them different physicochemical characteristics. Upon interactions with each other, they may acquire a specific spatial conformation. A peptide containing n amino acids may be arranged in 20^n possible ways. Thus, the combination of this relatively small set of different building blocks results in an enormous diversity in structure and biological activity. This seemingly endless possibility of combinations makes peptides particularly attractive as bioreceptors. By designing an amino acid sequence, one is able to obtain whatever physicochemical and structural characteristics are required for the detection of a given target or for a specific application. Hydrophobicity, polarity, length and even rigidity can be modulated quite easily by adjusting a peptide's amino acid constituents, and its selectivity and specificity towards a given target can be enhanced in the same way. The choice of amino acid sequence in peptides for biosensor probes is anything but arbitrary. Numerous methodologies have been developed to find the precise conformation that will result in the required selectivity and specificity towards a target—in this case, a foodborne pathogen. Some of the most notable peptide selection strategies, shown in , are the isolation and purification of natural antimicrobial peptides (AMPs) from living organisms, screening short-peptide libraries using genetically engineered bacteriophages with a phage display, rational in silico design and protein-derived approaches.

2.1. Antimicrobial Peptides

Antimicrobial peptides are naturally occurring molecules present in virtually all living organisms as a line of defense against the various pathogenic microbes to which we are constantly exposed.
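The combinatorial arithmetic introduced above (20^n sequences for an n-mer built from the 20 natural amino acids) is easy to verify computationally; the sketch below simply evaluates the power for a few library-relevant lengths.

```python
AMINO_ACIDS = 20  # natural L-amino acids

def sequence_space(n):
    """Number of distinct linear peptides of length n (20 choices per position)."""
    return AMINO_ACIDS ** n

# A 7-mer, a typical short phage-display insert length, already spans
# more than a billion distinct sequences; a 12-mer spans over 4 x 10^15.
print(sequence_space(7))
print(sequence_space(12))
```

This is why even short-peptide libraries can, in principle, contain far more diversity than the roughly one billion clones sampled in a physical phage-display library.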
Contrary to the mechanism of action of antibodies comprising the adaptive immune system, AMPs target microbes without specificity. This broad-spectrum antimicrobial activity is accomplished by targeting the negatively charged motifs in the bacterial envelope, such as the phospholipid head groups of bacterial membranes, or some oligosaccharides in the cell wall, absent in eukaryotes. AMPs are particularly important to those organisms that cannot afford the biological cost of having an adaptive immune system, such as insects, invertebrates and even bacteria themselves. A nomenclature for AMPs has not been standardized yet, so their classification is mainly based on their organism of origin, noted with the -cin suffix, i.e., human defensins, bacteriocins, etc. The first records of antimicrobial activity identified in a substance derived from a living organism date back to the 1920s, with Alexander Fleming's discovery of bacterial lysis upon contact with nasal secretions from patients. He named the protein responsible for this phenomenon "lysozyme". Further interest in antimicrobial substances arose in the 1980s, when three peptides identified and purified from the giant silk moth Hyalophora cecropia were shown to have bacteriolytic activity against Escherichia coli. Ever since, a plethora of unique AMPs have been identified from all sorts of living organisms, such as Magainin I, originally isolated from the skin of the African clawed frog Xenopus laevis in 1987, or Clavanin A, purified from the tunicate Styela clava. To this day, more than a thousand different AMPs have been identified, and a partial list can be found in the database APD3 ( https://aps.unmc.edu , accessed on 30 December 2022). Although the exact way in which these molecules disrupt the bacterial membrane remains unknown, a few hypotheses have been formulated regarding AMPs' mechanism of action.
These include adsorption to proteins and lipids from the membrane surface, nonlytic depolarization, solubilization of the membrane into micellar structures, the disruption of the osmotic regulation of the target bacteria and the ability to hijack biological processes crucial to bacterial survival, such as DNA and protein synthesis. Amongst the vast diversity of isolated AMPs, a common physicochemical feature stands out: an amphipathic conformation consisting of a cationic polar portion and a hydrophobic domain. This duality permits an initial interaction of the net positive charge fragment of the peptide with the negatively charged bacterial membrane, followed by the insertion of the peptide into the membrane, mediated by hydrophobic interactions. It has been hypothesized that the peptide's secondary structure plays an important role in this process, as helical structures and beta sheets may be able to present a continuous hydrophobic surface, advantageous for peptide–bacteria interactions. This mechanism of action is instrumental to the fact that bacterial adaptive resistance to AMPs is rare, as, to circumvent AMPs, bacteria would have to modify their membrane, which constitutes a large proportion of their total composition. However, when the targets are small proteins within the cell, which is often the case for antibiotics, this genetic modification might be much easier to perform, resulting in more easily acquired resistance. For this reason, AMPs have been explored as alternatives to antibiotic treatment, especially when dealing with multidrug-resistant organisms. AMPs lack the specificity of monoclonal antibodies but are nonetheless exceptional at recognizing and selectively interacting with bacteria. Further selectivity is achieved by targeting specific lipopolysaccharide (LPS) compositions, as they are highly variable between genera, species and even strains, differing by the number and structure of repeating oligosaccharide units.
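The two AMP hallmarks highlighted above, a cationic net charge and a sizeable hydrophobic domain, can be screened for computationally as a first-pass filter. The sketch below uses a deliberately crude charge model (fixed side-chain charges at neutral pH, free termini assumed to cancel) and a magainin-like sequence chosen purely for illustration; it is not a validated prediction tool.

```python
# Approximate side-chain charges at neutral pH: Lys/Arg +1, His a small
# partial charge, Asp/Glu -1. Free N- and C-termini (+1/-1) cancel out.
SIDE_CHAIN_CHARGE = {"K": 1.0, "R": 1.0, "H": 0.1, "D": -1.0, "E": -1.0}
HYDROPHOBIC = set("AVILMFWYC")

def net_charge(seq):
    """Crude net charge of a peptide sequence at neutral pH."""
    return sum(SIDE_CHAIN_CHARGE.get(aa, 0.0) for aa in seq)

def hydrophobic_fraction(seq):
    """Fraction of residues with hydrophobic side chains."""
    return sum(aa in HYDROPHOBIC for aa in seq) / len(seq)

# magainin-like illustrative sequence (for demonstration only)
seq = "GIGKFLHSAKKFGKAFVGEIMNS"
print(round(net_charge(seq), 1), round(hydrophobic_fraction(seq), 2))
```

A candidate with a clearly positive net charge and a hydrophobic fraction around 40-50% fits the amphipathic, cationic profile described in the text, whereas a neutral, polar sequence would be deprioritized.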
In the biosensor field, AMPs' binding capabilities make them excellent probe candidates for the development of highly sensitive multiplexed arrays, especially those in which the main priority is the confirmation of total sterility or the presence of pathogenic bacteria, rather than the identification of a specific species. includes a list of AMPs that have been incorporated into biosensors for the detection of foodborne pathogens. Other areas of application of AMPs in the food industry, such as food preservation, the development of antimicrobial packaging and the formulation of antibiofilm sanitizing products, have been extensively investigated in outstanding reviews. Synthetic antimicrobial peptides have also been thoroughly explored in recent years, mostly with a focus on clinical treatments due to their ability to kill antibiotic-resistant pathogens and because they rarely trigger resistance mechanisms in microorganisms. Although the mechanism of action of AMPs has not yet been fully elucidated, novel antimicrobial peptides can be designed based on their known defining characteristics, such as their short length, the formation of helices and β-sheets as secondary structures, their cationic net charge and their amphipathic nature. In the literature, some research groups improved specific peptides for pathogen detection by rational design. To do so, first, AMP sequences were isolated from living organisms, which generally resulted in the elucidation of the structure and design requirements for the synthetic construction of novel AMPs. Then, small mutations, such as deletions of a single amino acid, were made to increase their net charge, improving their selectivity towards bacterial pathogens. Rational design may also be used to confer specificity to an existing partially selective antimicrobial peptide through the addition of a species-specific targeting domain to a broad-spectrum AMP. For example, Eckert et al.
screened a set of short (8 to 12 amino acids) fluorescently labeled peptides with varying physicochemical characteristics against Pseudomonas spp. to find the strongest binding candidates. Out of those, a single peptide exhibiting selectivity against Pseudomonas (KKHRKHRKHRKH) was identified. They then introduced a linker sequence (GGSGGS) to incorporate the selective short peptide onto the C-terminal of Novispirin (KNLRRIIRKGIHIIKKYG), an AMP known to have lytic activity against a wide range of bacteria. The resulting chimeric peptide was able to selectively retard Pseudomonas growth and leave E. coli and Streptococcus mutans virtually unaffected . 2.2. Peptides Screened by Phage Display Phage display is a Nobel Prize-winning in vitro technique that selects phage clones expressing the peptides with the highest affinity to a specific target from an initial pool of candidates . The process closely resembles evolution, as it relies on the fabrication of billions of “candidates” through DNA recombination in viral particles and the selection of the fittest through consecutive cycles of high-affinity purification. The application of phage-displayed probes for biosensors in various fields has been recently discussed in several reviews . In order to find a peptide with high affinity to a selected target, an initial “library” is generated by inserting random foreign DNA sequences into bacteriophages, which, in turn, express the corresponding peptide on their outside coating. This will result in a heterogeneous mix of around a billion phage clones bearing different foreign peptides. The most commonly used type of phage for this technique is the M13 bacteriophage, which possesses five different coat proteins: pIII, pVI, pVII and pXI present in five copies each, and pVIII presents in around 2700 copies all along its filament . In general, phages express foreign peptides with between 7 and 15 amino acids, depending on the library. 
Phages expressing foreign peptides on the pVIII coat protein, such as those from the f8/8 library, express thousands of foreign peptides in a compact, reiterating pattern over the whole length of the phage capsid . These are also called “landscape phages”, as this multivalent display of the foreign peptide may result in the activity of not only the single peptide and its immediate surroundings but also global functions of the entire surface “landscape” . By contrast, a monovalent peptide display refers to its expression solely on pIII, present in five copies at one end of the filament. Monovalent display libraries are the most commonly used for bacterial targets. The classical phage display protocol consists of six main steps : First, the incubation of the initial phage library takes place over a surface on which the targets have been previously immobilized. Upon incubation, the phages with higher affinity to the target are attached, and those with low or no affinity remain unbound in the solution. The subsequent washing and elution steps with increasing levels of stringency apply selective pressure, the former to remove unbound phages and the latter to release bound phages with a higher affinity for the target analyte. Both steps must be performed without losing the virion’s infectivity for the next step to take place. Afterwards, E. coli host cells are added, and the process of amplification begins as the phages infect host cells for replication and, consequently, thousands more copies of those phages expressing the fittest peptides are produced. Finally, after three or four selection cycles (steps 1–4), the amino acid sequences of the few selected peptides with the highest affinity for the target are determined by sequencing the genetic material of the phage. They can then be synthesized and used as bioreceptors for the development of biosensors. 
Phage displayed peptides have been used for foodborne pathogen detection because of their remarkable selectivity towards a specific species or strain, contrary to the wide-spectrum binding of other peptides. shows a list of peptides selected by phage display with a foodborne pathogen as the target. The phage display approach yields the consensus sequences of the best binding candidates under a specific set of experimental conditions. However, different sequences may be found upon making slight changes . Thus, there are a few challenging aspects to take into account for a successful experimental design. For example, bacterial membranes are complex mixtures of proteins and phospholipids and express different antigens on their surfaces , which makes experiments more challenging than when dealing with simpler and smaller targets. Furthermore, the surface epitopes bacteria expressed when immobilized may be different to those expressed in the solution and different still to those expressed when in natural environments or food matrices. In this case, it is not surprising that the peptides obtained from immobilized bacteria would be different to those found when that same target is suspended in a buffer solution. However, a conclusion has not been reached on whether one strategy is inherently better than the other. Sorokulova et al. compared various methodologies when biopanning the f8/8 phage against the target Salmonella enterica Typhimurium, presenting it both in the solution and surface-immobilized for comparison. They found the peptide with the highest specificity when using immobilized bacteria . On the contrary, McIvor et al. tested a comparison between the two methodologies and found the opposite: for L. monocytogenes , the clones with specific binding to the serovar were found exclusively when working with bacterial suspensions in Phosphate-Buffered Saline (PBS) buffer . 
During a phage display experiment, non-target-specific peptides may be enriched inadvertently . Peptides with specificity to either the immobilization surface , the blocking solution, the capture molecule (biotin or streptavidin) or support of the target are called “target-unrelated peptides” (TUPs) . In order to avoid them, “subtractive biopanning” is advisable—that is, incubating the initial library onto the biopanning infrastructure containing all elements except the target to eliminate all potential TUPs. The most comprehensive list to date of TUP amino acid sequences reported in the literature is the “Scanner And Reporter Of Target-Unrelated Peptides” (SAROTUP) . Subtractive biopanning may also be used to find a peptide with higher selectivity. In this case, negative selection can be done by incubating the library with another microorganism similar to the target in a Gram stain or species in order to deplete it of those clones binding to motifs common to several bacteria and eliminating the possibility of cross-reactivity later on. Rao et al. used this strategy when biopanning against Staphylococcus aureus , performing pre-adsorptions against E. coli to subtract clones binding to Gram-negative bacteria and subsequently against Staphylococcus epidermis to eliminate those binding to the Staphylococcus genus . In contrast, McIvor et al. opted to perform negative selection in the last biopanning round, incubating peptides with Listeria innocua from the same genus as the target, L . monocytogenes to find peptides with higher specificity towards the latter . These data show that the biopanning step is crucial to obtain the most selective sequences, and there is not one experimental solution but a conjugation of selection steps that can help to obtain target-specific sequences. Interestingly, it is precisely the limitations of a blind, unbiased experiment that can be used to our advantage. 
Phage display has recently gained interest for finding new specific receptors that could identify surface motifs or structures that may not have been previously identified by other approaches . Additionally, an exceptional advantage of phage display is the fact that it bypasses the requirement for target immunogenicity, one of the main limitations of in vivo antibody production . Furthermore, a bacterial strain may not express surface epitopes that are both unique and antigenic, which would render the produced antibodies incapable of discriminating between strains of the same genus . McIvor et al. reported a notable example when testing commercial antibody-coated beads that were not able to differentiate between L. monocytogenes and L. innocua . They hypothesized that, in Listeria , the immunodominant epitopes may be shared between species, making it difficult for antibody probes to identify the less immunogenic but crucially specific epitopes that differentiate serovars . Additionally, the accurate detection of highly mutagenic foodborne pathogens, such as Norovirus, would likely need new specific probes to be developed every few years. In this case, the ease of adaptability and low cost of the phage display methodology would make it a much more attractive approach than the cost and laboriousness of constant antibody development . It is important to note that the categories of peptides mentioned in this review are not mutually exclusive, and some phage-displayed peptides may meet the same structural and net charge criteria as AMPs of living organisms, thus exhibiting antimicrobial activity against bacteria . This review focuses specifically on biosensing platforms using peptides as detection probes; however, either the synthetized specific peptide or the entire phage may be incorporated in biosensors. The use of phages displaying specific peptides as sensing elements has been thoroughly reviewed in previously published reviews [ , , , ]. 2.3. 
In Silico Design of Peptides Another strategy for peptide rational design is based on in silico tools. Indeed, in the last two decades, molecular simulations have shown to be a powerful theoretical technique to study peptide structures and dynamics [ , , ]. There are three main approaches for designing antimicrobial peptides: the modification of known AMP sequences, biophysical modeling and virtual screening . First, the sequence modification approach consists of using known AMP sequences as templates and subsequently modifying of one or more amino acids to identify the most crucial amino acids and their positions for antimicrobial activity or to elucidate the role of certain motifs present in the peptide on its overall mechanism of action . Wiradharma et al. designed short AMPs by using repeats of hydrophobic and cationic residues known to confer antimicrobial activity. In this way, they found peptides with increased antimicrobial activity against Gram-positive bacteria and selectivity towards microbial cells . Likewise, chimeric peptides with increased selectivity can be designed by combining motifs of wide-spectrum AMPs with targeted ones . Second, biophysical modeling relies on the design of AMPs based on structural motifs and their properties, accounting for their interactions with the bacterial membrane and the media around them. In this case, the use of molecular dynamics simulations can lead to the improvement of antimicrobial activity. Third, virtual screening approaches are used to explore sequence iterations that may prove too difficult to test using other screening techniques. These include the use of bioinformatics tools, such as machine learning methods, evolutionary algorithms and stochastic approaches. 
The development of online software capable of predicting AMPs derived from a given protein, such as the “Antibacterial peptides” (AntiBP) and the “Collection of Antimicrobial Peptides” (CAMP) servers, have facilitated the design of new peptides by using Quantitative Matrices, Artificial Neural Networks and Support Vector Machines. Recently, Yang et al. predicted, designed and validated an AMP derived from the sequence of the small subunit of Penaeus vannamei hemocyanin (PvHS) using these servers. Two out of the twelve predicted peptides showed strong antimicrobial activity on Gram-negative and Gram-positive bacteria . Subsequently, the team synthetized them and performed a structural analysis revealing a β-sheet structure, and scanning electron microscopy confirmed the peptide’s ability to disrupt the bacterial membrane. 2.4. Protein-Derived Peptides Furthermore, a library of short peptides may be produced and screened using larger proteins or enzymes with specific activity as the starting templates. For example, Palmieri et al. combined in silico predictions and docking simulations to design short peptides from the protein CPT-1A (carnitine palmitoyl transferase 1a), predicting advantageous mutations that would confer increased antimicrobial activity to candidate peptides. In this way, the team found two peptides with antimicrobial activity against L. monocytogenes . In another recent example, Mardirossian et al. designed short peptide fragments from the larger 25 amino acid peptide Bac5, a proline-rich AMP, for their use as antibiotics and tested their activity against E. coli , S. aureus , P. aeruginosa and other bacteria. They found the minimum length required for mammalian AMPs to keep their antimicrobial activity is 17 amino acids .
Antimicrobial peptides are naturally occurring molecules present in virtually all living organisms as a line of defense against the various pathogenic microbes to which we are constantly exposed. Contrary to the mechanism of action of antibodies comprising the adaptive immune system, AMPs target microbes without specificity. This broad-spectrum antimicrobial activity is accomplished by targeting negatively charged motifs in the bacterial envelope, such as the phospholipid head groups of bacterial membranes or certain oligosaccharides in the cell wall, which are absent in eukaryotes . AMPs are particularly important to those organisms that cannot afford the biological cost of an adaptive immune system, such as insects, other invertebrates and even bacteria themselves. A nomenclature for AMPs has not yet been standardized, so their classification is mainly based on their organism of origin, often noted with suffixes such as –cin (e.g., bacteriocins) or –in (e.g., human defensins). The first records of antimicrobial activity identified in a substance derived from a living organism date back to the 1920s, with Alexander Fleming’s discovery of bacterial lysis upon contact with nasal secretions from patients. He named the protein responsible for this phenomenon “lysozyme” . Further interest in antimicrobial substances arose in the 1980s, when three peptides identified and purified from the giant silk moth Hyalophora cecropia were shown to have bacteriolytic activity against Escherichia coli . Ever since, a plethora of unique AMPs have been identified from all sorts of living organisms, such as Magainin I, originally isolated from the skin of the African clawed frog Xenopus laevis in 1987 , or Clavanin A, purified from the tunicate Styela clava . To this day, more than a thousand different AMPs have been identified, and a partial list can be found in the database APD3 ( https://aps.unmc.edu , accessed on 30 December 2022) . 
Although the exact way in which these molecules disrupt the bacterial membrane remains unknown, a few hypotheses have been formulated regarding AMPs’ mechanism of action. These include adsorption to proteins and lipids on the membrane surface, nonlytic depolarization, solubilization of the membrane into micellar structures , disruption of the osmotic regulation of the target bacteria and the ability to hijack biological processes crucial to bacterial survival, such as DNA and protein synthesis. Amongst the vast diversity of isolated AMPs, a common physicochemical feature stands out: an amphipathic conformation consisting of a cationic polar portion and a hydrophobic domain. This duality permits an initial interaction between the positively charged fragment of the peptide and the negatively charged bacterial membrane, followed by the insertion of the peptide into the membrane, mediated by hydrophobic interactions . It has been hypothesized that the peptide’s secondary structure plays an important role in this process, as helical structures and beta sheets may present a continuous hydrophobic surface, advantageous for peptide–bacteria interactions. This mechanism of action helps explain why bacterial adaptive resistance to AMPs is rare: to circumvent AMPs, bacteria would have to remodel their membrane, which constitutes a large proportion of their total composition. By contrast, when the targets are small proteins within the cell, as is often the case for antibiotics, such genetic modifications are much easier to perform, so resistance is acquired more readily . For this reason, AMPs have been explored as alternatives to antibiotic treatment, especially when dealing with multidrug-resistant organisms . AMPs lack the specificity of monoclonal antibodies but are nonetheless exceptional at recognizing and selectively interacting with bacteria . 
Further selectivity is achieved by targeting specific lipopolysaccharide (LPS) compositions, as they are highly variable between genera, species and even strains, differing by the number and structure of repeating oligosaccharide units . In the biosensor field, AMPs’ binding capabilities make them excellent probe candidates for the development of highly sensitive multiplexed arrays, especially those in which the main priority is the confirmation of total sterility or the presence of pathogenic bacteria, rather than the identification of a specific species. includes a list of AMPs that have been incorporated into biosensors for the detection of foodborne pathogens. Other areas of application of AMPs in the food industry, such as food preservation, the development of antimicrobial packaging and the formulation of antibiofilm sanitizing products, have been extensively investigated in outstanding reviews [ , , ]. Synthetic antimicrobial peptides have also been thoroughly explored in recent years, mostly with a focus on clinical treatments due to their ability to kill antibiotic-resistant pathogens and because they rarely trigger resistance mechanisms in microorganisms . Although the mechanism of action of AMPs has not yet been fully elucidated, novel antimicrobial peptides can be designed based on their known defining characteristics, such as their short length, the formation of helices and β-sheets as secondary structures, their cationic net charge and their amphipathic nature. In the literature, some research groups improved specific peptides for pathogen detection by rational design. To do so, first, AMP sequences were isolated from living organisms, which generally resulted in the elucidation of the structure and design requirements for the synthetic construction of novel AMPs. Then, small mutations, such as deletions of a single amino acid, were made to increase their net charge, improving their selectivity towards bacterial pathogens . 
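To make these criteria concrete, the net charge and average hydropathy used in such designs can be computed directly from a sequence. The sketch below is illustrative only (it is not taken from the cited works); it uses a deliberately simplified charge model together with the Kyte-Doolittle hydropathy scale, applied here to Novispirin, a broad-spectrum AMP discussed in this section:

```python
# Illustrative AMP screening metrics: net charge at neutral pH and the
# Kyte-Doolittle grand average of hydropathy (GRAVY). Simplified model:
# K/R count as +1, D/E as -1; histidine and the termini are ignored.
KYTE_DOOLITTLE = {
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
    "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
    "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
    "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
}

def net_charge(seq: str) -> int:
    """Approximate net charge at neutral pH."""
    return sum(seq.count(aa) for aa in "KR") - sum(seq.count(aa) for aa in "DE")

def gravy(seq: str) -> float:
    """Grand average of hydropathy; negative values indicate hydrophilicity."""
    return sum(KYTE_DOOLITTLE[aa] for aa in seq) / len(seq)

novispirin = "KNLRRIIRKGIHIIKKYG"  # broad-spectrum AMP cited in this review
print(net_charge(novispirin))  # strongly cationic: +7
print(round(gravy(novispirin), 2))
```

A strongly positive net charge combined with a run of hydrophobic residues is the amphipathic signature described above; real design pipelines would also weigh helicity and the spatial segregation of polar and apolar faces.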
Rational design may also be used to confer specificity on an existing partially selective antimicrobial peptide through the addition of a species-specific targeting domain to a broad-spectrum AMP. For example, Eckert et al. screened a set of short (8 to 12 amino acids) fluorescently labeled peptides with varying physicochemical characteristics against Pseudomonas spp. to find the strongest binding candidates. Out of those, a single peptide exhibiting selectivity for Pseudomonas (KKHRKHRKHRKH) was identified. They then introduced a linker sequence (GGSGGS) to incorporate the selective short peptide onto the C-terminus of Novispirin (KNLRRIIRKGIHIIKKYG), an AMP known to have lytic activity against a wide range of bacteria. The resulting chimeric peptide was able to selectively retard Pseudomonas growth while leaving E. coli and Streptococcus mutans virtually unaffected .
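The fusion strategy of Eckert et al. can be written out explicitly. The three sequences below are those quoted in the text, while the helper function is our own illustrative shorthand:

```python
# Assembling the Pseudomonas-targeting chimera described above:
# broad-spectrum AMP + flexible linker + species-selective domain,
# with the targeting domain fused at the AMP's C-terminus.
NOVISPIRIN = "KNLRRIIRKGIHIIKKYG"  # broad-spectrum lytic AMP
LINKER = "GGSGGS"                  # flexible Gly/Ser linker
TARGETING = "KKHRKHRKHRKH"         # Pseudomonas-selective peptide

def fuse_at_c_terminus(amp: str, linker: str, domain: str) -> str:
    """Append a targeting domain to an AMP's C-terminus via a linker."""
    return amp + linker + domain

chimera = fuse_at_c_terminus(NOVISPIRIN, LINKER, TARGETING)
print(chimera)       # the 36-residue chimeric sequence
print(len(chimera))  # 36
```

The glycine/serine linker keeps the two functional modules conformationally independent, which is the design rationale behind such chimeras.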
2.2. Peptides Screened by Phage Display
Phage display is a Nobel Prize-winning in vitro technique that selects phage clones expressing the peptides with the highest affinity to a specific target from an initial pool of candidates . The process closely resembles evolution, as it relies on the fabrication of billions of “candidates” through DNA recombination in viral particles and the selection of the fittest through consecutive cycles of high-affinity purification. The application of phage-displayed probes for biosensors in various fields has been discussed in several recent reviews . In order to find a peptide with high affinity to a selected target, an initial “library” is generated by inserting random foreign DNA sequences into bacteriophages, which, in turn, express the corresponding peptides on their outer coating. This results in a heterogeneous mix of around a billion phage clones bearing different foreign peptides. The most commonly used phage for this technique is the M13 bacteriophage, which possesses five different coat proteins: pIII, pVI, pVII and pIX, present in about five copies each, and pVIII, present in around 2700 copies along the length of its filament . In general, phages express foreign peptides of between 7 and 15 amino acids, depending on the library. Phages expressing foreign peptides on the pVIII coat protein, such as those from the f8/8 library, display thousands of foreign peptides in a compact, reiterating pattern over the whole length of the phage capsid . These are also called “landscape phages”, as this multivalent display of the foreign peptide may give rise not only to the activity of the single peptide and its immediate surroundings but also to global functions of the entire surface “landscape” . By contrast, a monovalent peptide display refers to expression solely on pIII, present in five copies at one end of the filament. Monovalent display libraries are the most commonly used for bacterial targets. 
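To illustrate how a random DNA insert maps to a displayed peptide, the toy example below (not part of the original review) translates random 8-codon inserts with the standard genetic code and discards clones whose insert contains a premature stop codon, which practical library designs avoid:

```python
# Toy phage display library generation: random DNA inserts are translated
# into the foreign peptides the phage coat would display.
import random

random.seed(1)
BASES = "TCAG"
# Standard genetic code laid out in TCAG order; '*' marks stop codons.
AMINO = ("FFLLSSSSYY**CC*W" "LLLLPPPPHHQQRRRR"
         "IIIMTTTTNNKKSSRR" "VVVVAAAADDEEGGGG")
CODON_TABLE = {a + b + c: AMINO[16 * i + 4 * j + k]
               for i, a in enumerate(BASES)
               for j, b in enumerate(BASES)
               for k, c in enumerate(BASES)}

def translate(dna: str) -> str:
    return "".join(CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3))

def random_insert(n_codons: int = 8) -> str:
    return "".join(random.choice(BASES) for _ in range(3 * n_codons))

# Keep only clones whose displayed peptide has no premature stop codon.
library = [translate(random_insert()) for _ in range(1000)]
library = [p for p in library if "*" not in p]
print(len(library), library[0])
```

In a real experiment the diversity is on the order of a billion clones and the randomization scheme is biased to avoid stops; this sketch only captures the genotype-to-phenotype link that makes the later sequencing step possible.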
The classical phage display protocol consists of six main steps : First, the initial phage library is incubated over a surface on which the targets have been previously immobilized. Upon incubation, the phages with higher affinity to the target attach, while those with low or no affinity remain unbound in the solution. The subsequent washing and elution steps, performed with increasing levels of stringency, apply selective pressure: the former removes unbound phages and the latter releases bound phages with a higher affinity for the target analyte. Both steps must be performed without destroying the virion’s infectivity for the next step to take place. Afterwards, E. coli host cells are added and amplification begins: the phages infect the host cells for replication and, consequently, thousands more copies of the phages expressing the fittest peptides are produced. Finally, after three or four selection cycles (steps 1–4), the amino acid sequences of the few selected peptides with the highest affinity for the target are determined by sequencing the genetic material of the phage. They can then be synthesized and used as bioreceptors for the development of biosensors. Phage-displayed peptides have been used for foodborne pathogen detection because of their remarkable selectivity towards a specific species or strain, in contrast to the wide-spectrum binding of other peptides. shows a list of peptides selected by phage display with a foodborne pathogen as the target. The phage display approach yields the consensus sequences of the best binding candidates under a specific set of experimental conditions; however, different sequences may be found upon making slight changes . Thus, a few challenging aspects must be taken into account for a successful experimental design. 
For example, bacterial membranes are complex mixtures of proteins and phospholipids and express different antigens on their surfaces , which makes experiments more challenging than when dealing with simpler and smaller targets. Furthermore, the surface epitopes bacteria express when immobilized may differ from those expressed in solution, and differ again from those expressed in natural environments or food matrices. It is therefore not surprising that the peptides obtained from immobilized bacteria differ from those found when the same target is suspended in a buffer solution. However, no conclusion has been reached on whether one strategy is inherently better than the other. Sorokulova et al. compared various methodologies when biopanning the f8/8 phage against the target Salmonella enterica Typhimurium, presenting it both in solution and surface-immobilized for comparison. They found the peptide with the highest specificity when using immobilized bacteria . On the contrary, McIvor et al. compared the two methodologies and found the opposite: for L. monocytogenes , the clones with specific binding to the serovar were found exclusively when working with bacterial suspensions in Phosphate-Buffered Saline (PBS) buffer . During a phage display experiment, non-target-specific peptides may be enriched inadvertently . Peptides with specificity to the immobilization surface , the blocking solution, the capture molecule (biotin or streptavidin) or the support of the target are called “target-unrelated peptides” (TUPs) . To avoid them, “subtractive biopanning” is advisable: incubating the initial library on the biopanning infrastructure containing all elements except the target, so as to eliminate all potential TUPs. The most comprehensive collection to date of TUP amino acid sequences reported in the literature is the “Scanner And Reporter Of Target-Unrelated Peptides” (SAROTUP) . 
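The overall selection logic, including a subtractive pre-adsorption step to deplete target-unrelated peptides, can be caricatured in a few lines. Everything here is a stand-in: the affinity function and the background-binding rule are invented for illustration and are not real binding models:

```python
# Toy biopanning simulation: rounds of selection enrich clones with high
# (invented) affinity for the target, after a subtractive step depletes
# clones that bind the biopanning infrastructure rather than the target.
import random

random.seed(0)
AA = "ACDEFGHIKLMNPQRSTVWY"

def target_affinity(peptide: str) -> int:
    return sum(peptide.count(aa) for aa in "KRH")  # hypothetical model

def binds_background(peptide: str) -> bool:
    return "WW" in peptide  # hypothetical plastic/blocking-agent binder

library = ["".join(random.choice(AA) for _ in range(8)) for _ in range(2000)]

# Subtractive pre-adsorption: remove target-unrelated peptides (TUPs).
library = [p for p in library if not binds_background(p)]

for _ in range(4):  # three or four selection cycles, as described above
    survivors = sorted(library, key=target_affinity, reverse=True)[:200]
    library = survivors * 10  # amplification in E. coli host cells

best = max(library, key=target_affinity)
print(best, target_affinity(best))
```

The wash/elute stringency of a real experiment is collapsed here into a single ranking step; what the sketch preserves is the enrichment dynamic, in which amplification of the fittest clones dominates the library after a handful of rounds.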
Subtractive biopanning may also be used to find a peptide with higher selectivity. In this case, negative selection can be performed by incubating the library with another microorganism similar to the target in Gram stain or species, in order to deplete it of those clones binding to motifs common to several bacteria and to eliminate the possibility of cross-reactivity later on. Rao et al. used this strategy when biopanning against Staphylococcus aureus , performing pre-adsorptions against E. coli to subtract clones binding to Gram-negative bacteria and subsequently against Staphylococcus epidermidis to eliminate those binding to the Staphylococcus genus . In contrast, McIvor et al. opted to perform negative selection in the last biopanning round, incubating phages with Listeria innocua , from the same genus as the target L. monocytogenes , to find peptides with higher specificity towards the latter . These data show that the biopanning step is crucial to obtaining the most selective sequences, and there is not one experimental solution but rather a combination of selection steps that can help to obtain target-specific sequences. Interestingly, it is precisely the limitations of a blind, unbiased experiment that can be turned to our advantage. Phage display has recently gained interest for finding new specific receptors that could identify surface motifs or structures not previously identified by other approaches . Additionally, an exceptional advantage of phage display is that it bypasses the requirement for target immunogenicity, one of the main limitations of in vivo antibody production . Furthermore, a bacterial strain may not express surface epitopes that are both unique and antigenic, which would render the produced antibodies incapable of discriminating between strains of the same genus . McIvor et al. reported a notable example when testing commercial antibody-coated beads that were not able to differentiate between L. monocytogenes and L. innocua . 
They hypothesized that, in Listeria , the immunodominant epitopes may be shared between species, making it difficult for antibody probes to identify the less immunogenic but crucially specific epitopes that differentiate serovars . Additionally, the accurate detection of rapidly mutating foodborne pathogens, such as Norovirus, would likely require new specific probes to be developed every few years. In this case, the adaptability and low cost of the phage display methodology make it a much more attractive approach than costly and laborious repeated antibody development . It is important to note that the categories of peptides mentioned in this review are not mutually exclusive, and some phage-displayed peptides may meet the same structural and net charge criteria as AMPs of living organisms, thus exhibiting antimicrobial activity against bacteria . This review focuses specifically on biosensing platforms using peptides as detection probes; however, either the synthesized specific peptide or the entire phage may be incorporated into biosensors. The use of phages displaying specific peptides as sensing elements has been thoroughly covered in previously published reviews [ , , , ].
2.3. In Silico Design of Peptides
Another strategy for peptide rational design is based on in silico tools. Indeed, in the last two decades, molecular simulations have proven to be a powerful theoretical technique for studying peptide structures and dynamics [ , , ]. There are three main approaches for designing antimicrobial peptides: the modification of known AMP sequences, biophysical modeling and virtual screening . First, the sequence modification approach consists of using known AMP sequences as templates and subsequently modifying one or more amino acids, either to identify the amino acids and positions most crucial for antimicrobial activity or to elucidate the role of certain motifs present in the peptide in its overall mechanism of action . Wiradharma et al. designed short AMPs by using repeats of hydrophobic and cationic residues known to confer antimicrobial activity. In this way, they found peptides with increased antimicrobial activity against Gram-positive bacteria and selectivity towards microbial cells . Likewise, chimeric peptides with increased selectivity can be designed by combining motifs of wide-spectrum AMPs with targeted ones . Second, biophysical modeling relies on the design of AMPs based on structural motifs and their properties, accounting for their interactions with the bacterial membrane and the surrounding media. In this case, the use of molecular dynamics simulations can guide improvements in antimicrobial activity. Third, virtual screening approaches are used to explore sequence variations that may prove too difficult to test using other screening techniques. These include the use of bioinformatics tools such as machine learning methods, evolutionary algorithms and stochastic approaches. 
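A minimal version of the sequence modification approach can be sketched as follows. The template resembles a natural frog AMP but is used purely for illustration, and the single scoring criterion (net charge) stands in for the richer objective functions used in practice:

```python
# Minimal "sequence modification" screen: enumerate all single-residue
# substitutions of a template AMP and rank them by net charge, one of the
# design criteria discussed in this section.
AA = "ACDEFGHIKLMNPQRSTVWY"

def net_charge(seq: str) -> int:
    return sum(seq.count(aa) for aa in "KR") - sum(seq.count(aa) for aa in "DE")

def single_mutants(template: str):
    """Yield every sequence one substitution away from the template."""
    for i, original in enumerate(template):
        for aa in AA:
            if aa != original:
                yield template[:i] + aa + template[i + 1:]

template = "GLFDIIKKIAESF"  # illustrative template sequence
best = max(single_mutants(template), key=net_charge)
print(best, net_charge(best))
```

Note that a single substitution can raise the net charge by at most two units (removing an anionic residue while adding a cationic one), which the screen recovers; a real pipeline would couple this enumeration to structure and hydrophobicity predictions before synthesis.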
The development of online software capable of predicting AMPs derived from a given protein, such as the “Antibacterial peptides” (AntiBP) and the “Collection of Antimicrobial Peptides” (CAMP) servers, has facilitated the design of new peptides by using Quantitative Matrices, Artificial Neural Networks and Support Vector Machines. Recently, Yang et al. predicted, designed and validated an AMP derived from the sequence of the small subunit of Penaeus vannamei hemocyanin (PvHS) using these servers. Two of the twelve predicted peptides showed strong antimicrobial activity against Gram-negative and Gram-positive bacteria . Subsequently, the team synthesized them and performed a structural analysis revealing a β-sheet structure, and scanning electron microscopy confirmed the peptides’ ability to disrupt the bacterial membrane.
2.4. Protein-Derived Peptides
Furthermore, a library of short peptides may be produced and screened using larger proteins or enzymes with specific activity as the starting templates. For example, Palmieri et al. combined in silico predictions and docking simulations to design short peptides from the protein CPT-1A (carnitine palmitoyltransferase 1a), predicting advantageous mutations that would confer increased antimicrobial activity on candidate peptides. In this way, the team found two peptides with antimicrobial activity against L. monocytogenes . In another recent example, Mardirossian et al. designed short peptide fragments from the larger 25-amino-acid peptide Bac5, a proline-rich AMP, for use as antibiotics and tested their activity against E. coli , S. aureus , P. aeruginosa and other bacteria. They found that the minimum length required for these mammalian AMPs to retain their antimicrobial activity is 17 amino acids .
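The sliding-window screening implied by this approach can be sketched in a few lines. The parent sequence below is an invented placeholder (not CPT-1A or Bac5), and the 17-residue window length echoes the minimum active length reported above:

```python
# Sliding-window generation of protein-derived candidate peptides: every
# 17-residue fragment of a parent protein is kept if it carries a clearly
# cationic net charge, a minimal AMP-like filter.
def windows(protein: str, length: int = 17):
    return [protein[i:i + length] for i in range(len(protein) - length + 1)]

def net_charge(seq: str) -> int:
    return sum(seq.count(aa) for aa in "KR") - sum(seq.count(aa) for aa in "DE")

parent = "MSTKRLLKWAGRKIAEDFGHKKLRPQSVTRKN"  # placeholder parent sequence
candidates = [w for w in windows(parent) if net_charge(w) >= 4]
print(f"{len(windows(parent))} windows, {len(candidates)} cationic candidates")
```

In an actual study the charge filter would be only the first pass; surviving fragments would then go through structure prediction, docking or direct antimicrobial assays, as in the examples above.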
Once the peptide bioreceptors have been selected, the next critical step is to graft them onto a sensor surface for target detection. Peptides are well suited for biochip functionalization, as they are resistant to air-drying without a significant loss of activity . Furthermore, they can be easily grafted using different surface chemistry strategies. Effective target binding can depend largely on the immobilization methodology, as it plays a significant role in the number of nonspecific binding events, the amount of background noise and the reproducibility and repeatability of biosensor manufacturing. Several strategies for peptide immobilization have been tested and compared in the quest to find the most efficient one . Different sensor setups may have specific requirements. Furthermore, the orientation of the immobilized peptide on a gold surface can be easily chosen by adding a cysteine anchor at either the C- or the N-terminus. Several groups have found that the best binding and the highest antimicrobial activity are maintained when immobilizing peptides through the addition of a cysteine at the C-terminus, suggesting that, in vivo, the free N-terminus initiates the interaction with the bacteria due to its positive charge [ , , ]. Short linear peptides can reach a much higher surface density than other biomaterials and are able to form a uniform grafted layer due to the spontaneous formation of self-assembled monolayers (SAMs) on gold surfaces. This is advantageous for biosensors, as it has been observed that the ability of a peptide to capture target bacteria is strongly dependent on its concentration and density on the sensor surface . Additionally, bacterial binding to peptides results in lower steric hindrance than to antibodies, allowing for a higher binding avidity for the target per surface unit . To further increase the immobilization surface area, peptides may also be grafted onto nanoparticles.
This is especially interesting when testing extremely small sample volumes that require a higher surface-to-volume ratio, as is the case for microfluidic devices . In another recent work, Baek et al. tested the importance of linker flexibility in peptide–target interactions by introducing a rigid linker (-EAAAK-) rather than a flexible one (-GGGGS-), both at the C-terminus of the peptide and in the middle, and comparing their performances to those of the native sequences. They found the best binding to Norovirus when introducing the flexible linker, highlighting the advantages of using these short molecules as flexible probes . Peptides have been used in a wide variety of biosensors for foodborne pathogen detection, as seen in . They target either the bacterial surface or molecules released by bacteria into the medium. Consequently, peptide-based biosensors do not require complex sample preparation prior to analysis, which significantly shortens the analysis time and enables rapid and low-cost diagnostics in foodstuffs. The platforms developed so far rely on different transduction methodologies: electrochemical, optical, mechanical or hybrids of these. In the following section, an overview of peptide-based biosensors based on various transduction systems for foodborne pathogen detection is presented.

3.1. Electrochemical Peptide-Based Biosensors

Biosensors based on electrochemistry are extensively developed for bacterial detection because of their high sensitivity, rapidity and low cost. They can be classified as amperometric, voltammetric, conductometric, potentiometric or impedimetric—the last two being the most used for foodborne pathogen detection. The use of peptides in electrochemical biosensors for the detection of foodborne pathogens was reviewed recently .
Potentiometric sensing is based on the measurement of the potential difference between the working electrode and the reference electrode in the absence of an electrical charge flowing between them. Although it has many advantages, such as low cost, ease of use and rapidity, potentiometric biosensors require control of the ionic strength of the sample. Otherwise, the differently charged species in the sample may interfere and produce a potentiometric response, generating a false-positive result . Lv et al. developed a potentiometric sandwich assay using short AMPs for the detection of L. monocytogenes in spiked seawater samples ( ). For this, the original long AMP with a well-defined structure for L. monocytogenes was split into two fragments to serve as the peptide pair for the sandwich assay. They succeeded in eliminating background interference from the complex matrix and from other pathogenic bacteria by adding a magnetic separation step with Leucocin A-coated magnetic nanoparticles (MNPs) and using an online filtration system for the preconcentration of the target. The whole 60 min assay reached a limit of detection (LOD) of 10 CFU mL −1 without a significant response to other bacteria, even those of the same Gram stain or of the same genus . Electrochemical impedance spectroscopy (EIS) measures the impedance over a suitable frequency range through the application of a small sinusoidally varying potential. EIS biosensors offer simple instrumentation, ease of assembly and operation, adaptability to miniaturized devices and compatibility with multiplex detection . These biosensors have achieved remarkably low LODs and linear detection ranges of up to six orders of magnitude for foodborne pathogens. Some of the first efforts for bacterial detection using AMPs were developed using EIS. Notably, Mannoor et al.
immobilized the semi-selective AMP Magainin I onto an interdigitated gold electrode (GE) array via a C-terminal cysteine residue, thanks to the formation of SAMs ( ). Their microcapacitive biosensor demonstrated both Gram-selective detection and interbacterial strain differentiation, with a detection limit of 1 × 10 3 CFU mL −1 , a clinically relevant detection range . Since then, various breakthroughs in EIS biosensor performance have been achieved. Shi et al. immobilized two phage display peptides specific to E. coli O157:H7 on a three-electrode system capable of detecting 20 CFU mL −1 with only a 30 min incubation, which represents a remarkable improvement in LOD. Notably, Wilson et al. were able to detect E. coli with a LOD of 1 CFU mL −1 in potable water and 3.5 CFU mL −1 in apple juice, without sample preparation and within only 25 min ( ). First, they subjected the sample to a preconcentration step using magnetic nanoparticles coated with Melittin. Next, EIS measurements were performed using an interdigitated electrode array screen-printed onto a PET substrate as an inexpensive alternative to gold electrodes, which require photolithography. Their system showed good repeatability and stability . In contrast, Baek et al. selected a much smaller target, the human norovirus. They immobilized eight norovirus-specific phage display peptides onto the screen-printed working electrode through the formation of SAMs ( ). The obtained biosensors were able to detect 1.7 copies mL −1 in oyster samples in 30 min without signal interference from another pathogenic species present, the rotavirus. This outstanding performance resulted in a biosensor much more sensitive than classical detection methods. Such a system provides a promising strategy for the identification and quantification of norovirus food contaminants with minimized sample preparation and volumes .
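EIS responses like those above are commonly interpreted with a simplified Randles equivalent circuit: the solution resistance Rs in series with the charge-transfer resistance Rct in parallel with the double-layer capacitance Cdl. The sketch below (with purely illustrative component values) shows why the magnitude of the impedance approaches Rs + Rct at low frequency, where bacterial binding modulates Rct, and collapses to Rs at high frequency.

```python
# Simplified Randles circuit: Z(w) = Rs + Rct / (1 + j*w*Rct*Cdl).
# At low frequency |Z| -> Rs + Rct (charge transfer dominates);
# at high frequency |Z| -> Rs. Component values are illustrative only.
import math

def randles_impedance(freq_hz, rs=100.0, rct=10_000.0, cdl=1e-6):
    """Complex impedance of Rs in series with (Rct parallel Cdl)."""
    w = 2 * math.pi * freq_hz
    return rs + rct / (1 + 1j * w * rct * cdl)

z_low = randles_impedance(0.01)    # 10 mHz
z_high = randles_impedance(1e6)    # 1 MHz
print(f"|Z| at 10 mHz: {abs(z_low):.0f} ohm")   # close to Rs + Rct
print(f"|Z| at 1 MHz:  {abs(z_high):.0f} ohm")  # close to Rs
```

Real fits usually replace Cdl with a constant-phase element and add a Warburg term for diffusion, but the low-frequency sensitivity to Rct is the same in those richer models.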
A common feature of EIS biosensors is the correlation of bacterial concentrations with impedance signals at low frequencies, which indicates that the impedance is related to charge transfer properties at the surface of the electrode. At higher frequencies, however, bacterial concentrations have less influence on the impedance, suggesting that, at that stage, the dielectric relaxation of small dipoles, including water molecules, becomes more dominant in the impedance changes . Photoelectrochemical (PEC) techniques differ slightly from other electrochemical methods in that an applied light source generates electron excitation and charge transfer from a photoexcited semiconductive material, which converts visible light into a photocurrent. Yin et al. chose upconversion nanophosphors (UCNPs), fluorophores able to convert absorbed photon energy into luminescence, to develop a PEC lab-on-paper platform triggered by near-infrared (NIR) light for the detection of E. coli O157:H7 in food samples ( ) . NIR light is well suited for biosensor use, as it possesses low phototoxicity and better biocompatibility than ultraviolet (UV) light , which may cause serious interference and unstable signals . Using Magainin I peptides as bioreceptors grafted onto paper working electrodes (PWE), the obtained biosensors demonstrated preferential binding to E. coli O157:H7, with only a mild interfering response obtained with S. typhimurium . They further improved the upconversion luminescence properties of their substrate by using silver nanoparticles (AgNPs) and exploiting their localized SPR (LSPR) effects, achieving the lowest limit of detection for Magainin I reported to date for this bacterium: 2 CFU mL −1 , even when testing in complex food matrices.

3.2. Optical Peptide-Based Biosensors

Optical biosensors quantify analytes through the correlation of binding events with a measurable characteristic of light waves.
They are often based on the measurement of absorbance, reflectance or fluorescence emissions occurring in the UV, visible or NIR light spectra . Optical biosensors may either require labels, as in colorimetric or fluorescent approaches, or be label-free, as in biosensors based on SPR. The main advantages of these biosensors are reproducibility, sensitivity, the possibility of adaptation for multiplex detection and rapidity. Labeled biosensors measure colorimetric or fluorescent changes that occur upon the interaction of a chromophore or fluorophore with the analyte. They consist of four elements: a light source, a wavelength selection device, a substrate in which changes occur upon interaction with the analytes and a detector sensitive to the wavelength of interest . Fluorescence occurs when an excited electron relaxes from an excited singlet state to the ground state, emitting a photon. This electron typically belongs to an aromatic molecule capable of producing fluorescence, called a fluorophore, which may be a dye, a product of an enzymatic reaction or a nanomaterial, such as nanoclusters (NCs) or quantum dots (QDs) . Fluorescence is by far the most popular approach for optical detection due to its high sensitivity, as the emission of even a single photon may be sufficient for quantification . It is widely used in biosensing applications, as it is simple to set up and easily measured by fluorescence spectroscopy, and it is normally the first proof-of-concept approach, as in the case of ELISA immunoassays. Some of the lowest limits of detection reported to date have resulted from the incorporation of phage display peptides into optical biosensors, being two orders of magnitude lower than those reached when using AMPs. Li et al. developed an optical biosensor for the simultaneous detection of three pathogens ( E. coli , L. monocytogenes and B. melitensis ) using phage display peptides and multicolor QDs.
For this, peptides were immobilized onto magnetic beads (MBs) for the recognition and enrichment of the targets from the complex sample matrix. Then, three QD probes with different emission wavelengths were functionalized with three polyclonal antibodies, respectively. By mixing the functionalized MBs and QDs, they obtained peptide MBs–pathogen–QD probe sandwich immune complexes, which allowed for the simultaneous fluorescence detection of the three pathogens. Their highly sensitive and specific 100 min assay was able to differentiate and quantify the three foodborne pathogens ( ) . Colorimetric biosensors measure absorbance or reflectance events in the UV–Vis spectrum upon the interaction of chromophores with one or more analytes. These sensors often include nanomaterials, such as nanoparticles and nanosheets (NSs), as reporter structures . Colorimetric platforms are commonly used for foodborne pathogen detection using peptides due to their versatility. Gold nanoparticle (AuNP)-based colorimetric assays have been widely used for biosensing, as AuNPs exhibit a unique surface plasmon resonance that depends on their dispersion or aggregation state. Moreover, changes in target concentration can induce color changes visible to the naked eye. Liu et al. designed a colorimetric biosensor for the detection of S. aureus in various real water samples by immobilizing specific phage display peptides onto cysteamine-modified AuNPs (CS-AuNPs) ( ). Such functionalized NPs aggregated quickly in the presence of the target S. aureus and were successfully used to detect the pathogen within 30 min with a LOD of 19 CFU mL −1 and excellent selectivity over other bacteria. This approach is particularly interesting due to its sensitivity, specificity and rapidity, with no need for any costly instrument .
Horseradish peroxidase (HRP) is an enzyme widely used in immunoassays such as ELISA due to its ability to catalyze the conversion of chromogenic substrates into colored products or to produce light when acting on chemiluminescent substrates . Qiao et al. bioconjugated the AMP Magainin I with HRP through a biotin–streptavidin interaction for the rapid and extremely sensitive colorimetric detection of E. coli O157:H7 in apple juice and ground beef ( ). The AMP–HRP conjugate, used as a signal reporter, bound to LPS on the surface of the Gram-negative bacteria; after a filtration step to reduce non-specific binding and steric effects, the bacterial concentration could be easily visualized and quantified by UV–Vis absorption measurements. Their system could detect E. coli O157:H7 at levels as low as 13 CFU mL −1 in a pure culture, with a linear range of 10 2 –10 5 CFU mL −1 , in 45 min without pre-enrichment . Although widely used in biochemistry, HRP has some limitations, such as high cost and low stability in some food matrices and over time. Consequently, there has been a surge in the search for stable, lower-cost inorganic nanomaterials with peroxidase-like activity. Such is the case of the manganese dioxide nanosheets (MnO 2 NSs) used by Liu et al. to immobilize specific peptides for the detection of Vibrio parahaemolyticus in water and seafood samples ( ). In this case, 9-mer phage display peptides were both fused to MnO 2 NSs to create a MnO 2 NSs@peptide complex and immobilized by physical adsorption onto a surface. To perform a sandwich immunoassay, bacteria were first incubated for two hours on the peptide-grafted surface to ensure binding. Next, the MnO 2 NSs@peptide fusion was added for one hour to create the sandwich complex. Finally, the chromogenic substrate 3,3′,5,5′-tetramethylbenzidine (TMB) was added for 30 min, which resulted in color changes according to the bacterial concentration, determined by absorbance measurements at 652 nm.
Their system showed a wide detection range (20–10 4 CFU mL −1 ), a LOD of 15 CFU mL −1 and excellent selectivity. Finally, practical performance was successfully demonstrated by spiking marine samples, with recoveries from 98.0 to 102.5% . As for label-free optical techniques, SPR-based sensing is commonly used for foodborne pathogen detection. SPR biosensors measure the changes in the refractive index of a dielectric medium due to the excitation of surface plasmons at the interface between said medium and a thin metal film, usually gold . Their main advantage is the capability for real-time, label-free detection with high sensitivity. Surface plasmon resonance imaging (SPRI) is a multiplex SPR approach based on an imaging mode. It allows for the simultaneous monitoring of the interactions between the analyte and hundreds of sensors on the same chip, with a temporal response and kinetic information that may provide additional discriminatory parameters . Pardoux et al. developed a prism coupler-based SPR biosensor using a five-AMP microarray for the detection of pathogenic bacteria. The detection of five different pathogens by SPRI can be achieved in a single 18 h step, as it is a label-free technique in which no pre-enrichment is required. In this case, the wide-spectrum recognition of AMPs was particularly relevant, as the differing levels of affinity characteristic of these peptides created a cross-reactive sensor matrix that, coupled with multivariate analyses, was able to accurately discriminate between bacteria ( ). Furthermore, they achieved some of the lowest LODs for E. coli O157:H7, S. epidermidis and S. typhimurium , detecting 51, 16 and 6 CFU mL −1 , respectively . Zhou et al. developed a waveguide coupler-based SPR biosensor using optical fibers for the detection of pathogenic Gram-negative E. coli O157:H7 in water and juice, using Magainin I as a bioreceptor and AgNP-reduced graphene oxide (AgNP-rGO) nanocomposites for signal amplification ( ).
The biosensor had a LOD of 5 × 10 2 CFU mL −1 , showed little to no interference from nonpathogenic or Gram-positive bacteria present in the sample and exhibited remarkable reproducibility, with a 4.2% relative standard deviation (RSD) across five biosensors constructed in parallel . Electrochemiluminescence (ECL), in contrast to photoelectrochemistry, consists of monitoring the production of photons, namely the light intensity produced during an electrochemical reaction in solution. This analytical method provides outstanding benefits: excellent sensitivity due to the absence of background noise, versatility, spatial and temporal resolution and electrochemical control of the reactivity. Li et al. incorporated Magainin I into an ECL platform in a sandwich assay for the highly specific detection of E. coli O157:H7 in water. They immobilized Magainin I onto the gold working electrode surface as a bioreceptor. Additionally, they labeled the peptide with a ruthenium complex (Ru1) ECL label, whose ECL intensity increases proportionally with the bacterial concentration in the sample. Their biosensor, which did not need any pre-enrichment or separation steps, achieved a LOD of 1.2 × 10 2 CFU mL −1 and allowed Magainin I to keep its characteristic selectivity towards Gram-negative bacteria ( ). However, it was not able to discriminate between pathogenic E. coli O157:H7 and S. typhimurium .

3.3. Nanomechanical Peptide-Based Biosensors

Mechanical biosensors are based on the measurement of forces, displacements and mass changes . Most mechanical biosensors have a small cantilever sensitive to the molecule of interest. The microcantilever translates binding events into mechanical signals by monitoring deflection changes. Etayash et al. developed a microfluidic channel on a biomaterial cantilever (BMC), functionalized with an anti- L . monocytogenes monoclonal antibody and the AMP Leucocin A, to detect L. monocytogenes in only a 50 picoliter volume ( ).
Bacterial adsorption induced changes in the resonance frequency and cantilever deflection. When the trapped bacteria were excited with infrared radiation, the cantilever deflected in proportion to the infrared absorption of the bacteria, providing a nanomechanical infrared spectrum for selective bacterium identification. The Leucocin A-coated BMC exhibited preferential binding to L. monocytogenes two to three orders of magnitude higher than to E. coli . Furthermore, they achieved a limit of detection of 100 cells in 100 µL water samples. Through the incorporation of infrared absorption spectroscopy, they were able to accurately differentiate between injured and intact cells . summarizes peptide-based biosensors using various transduction systems together with their performances. Clearly, their excellent stability and low production cost make peptides very promising bioreceptors compared to antibodies. Most importantly, the performances of the obtained peptide-based biosensors are remarkable. Although various breakthroughs have been achieved, and in some cases the biosensor performance is already comparable to that of classical techniques or immunoassays, key challenges remain for foodborne pathogen detection biosensors. The main ones often concern the complexity of the food matrix itself, due to its diverse composition as well as the electrical charge of its components. In such media, the accurate detection of bacterial species can be especially challenging for peptides whose interactions are dominated by electrostatic forces. As an example, Etayash et al. succeeded in discriminating multiple species of pathogenic Gram-positive bacteria in buffer solutions. However, the results were not the same when working with pure milk samples, possibly due to the high protein content of the sample . Another major challenge is cross-contamination from other microorganisms.
When a biosensing platform is developed for a specific application, it is important to screen against all typically cross-reactive species in that particular setting in order to validate its applicability, which several reported biosensors have failed to do . To address the inability of peptides to discriminate against cross-contaminating dead bacteria, Fan et al. coupled their detection technique with a luciferase bioluminescence system to quantify ATP, a molecule found only in live organisms . Furthermore, there is a variety of proteases in different foods, especially unprocessed foods, which can degrade peptides into smaller fragments or single amino acids and inactivate them. These proteases, such as trypsin, thermolysin or the carboxypeptidases, are one of the major limitations preventing the real-life application of peptide-based biosensors. However, the stability of peptides may be increased through chemical modifications that prevent enzymatic degradation, including click chemistry to stabilize peptide dimerization or multimerization , replacement of an L-enantiomer by its D-enantiomer and conjugation of specific groups, such as fatty acids or side-chain analogs, to peptide side chains or to the N- or C-terminus . These fine-tunings make it difficult for proteases to recognize the cleavage sites, providing the peptide with prominent proteolytic resistance. However, chemical modifications may decrease or abolish the peptide’s recognition efficiency, and the binding and analytical properties of stabilized peptides must be tested before their implementation.
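The LODs quoted throughout this section are typically estimated from a linear calibration curve, commonly as LOD = 3.3·σ_blank/slope (the ICH-style definition), with the signal plotted against log10 of the bacterial concentration. A minimal sketch of that estimate follows; the calibration points and blank replicates are made-up illustrative numbers, not data from any cited study.

```python
# Sketch: estimate a limit of detection from a linear calibration curve,
# LOD = 3.3 * sigma_blank / slope. All data below are illustrative only.
import statistics

# Signal (arbitrary units) vs log10(CFU/mL) -- illustrative example points
log_conc = [1, 2, 3, 4, 5]
signal = [0.12, 0.23, 0.35, 0.44, 0.57]
blank_replicates = [0.010, 0.013, 0.008, 0.011, 0.009]

# Ordinary least-squares slope of signal vs log10(concentration)
n = len(log_conc)
mx = sum(log_conc) / n
my = sum(signal) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(log_conc, signal))
         / sum((x - mx) ** 2 for x in log_conc))

sigma_blank = statistics.stdev(blank_replicates)
lod_log = 3.3 * sigma_blank / slope          # in log10(CFU/mL) units
print(f"slope = {slope:.3f} AU per decade")
print(f"LOD = 3.3*sigma/slope = {lod_log:.3f} log10(CFU/mL)")
```

Because the x-axis is logarithmic, the LOD comes out in log10(CFU/mL) units and would be converted back with 10**lod_log; reported biosensor LODs also depend on matrix effects, so this calculation is only the starting point of a validation.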
Biosensors based on electrochemistry are extensively developed for bacterial detection because of their high sensitivity, rapidity and low cost. They can be classified as amperometric, voltametric, conductometric, potentiometric or impedimetric—the last two being the most used for foodborne pathogen detection. The use of peptides on electrochemical biosensors for the detection of foodborne pathogens was reviewed recently . Potentiometric sensing is based on the measurement of the potential differences between the working electrode and the reference electrode in the absence of an electrical charge flowing between them. Although it has many advantages, such as low cost, ease of use and rapidity, potentiometric biosensors require having control of the ionic strength of the sample. Otherwise, the different charges of species in the sample may interfere and lead to a potentiometric response, generating a false-positive result . Lv et al. developed a potentiometric sandwich assay using short AMPs for the detection of L. monocytogenes in spiked seawater samples ( ). For this, the original long AMP with a well-defined structure for L. monocytogenes was split into two fragments in order to serve as the peptide pairs for the sandwich assay. They succeeded in eliminating background interferences from the complex matrix and from other pathogenic bacteria with the addition of a magnetic separation step with Leucocin A-coated magnetic nanoparticles (MNPs) and the use of an online filtration system for the preconcentration of the target. The whole 60 min assay reached a limit of detection (LOD) of 10 CFU mL −1 without having a significant response to other bacteria, even those of the same Gram stain or of the same genus . Electrochemical impedance spectroscopy (EIS) measures the impedance over a suitable frequency range through the application of a small sinusoidally varying potential. 
EIS biosensors offer simple instrumentation, ease of assembly and operation, adaptability to miniaturized devices and compatibility with multiplex detection . These biosensors have achieved remarkably low LODs and linear detection ranges of up to six orders of magnitude for foodborne pathogens. Some of the first efforts for bacterial detection using AMPs were developed using EIS. Notably, Mannoor et al. immobilized the semi-selective AMP Magainin I onto an interdigitated gold electrode (GE) array via a C-terminal cysteine residue thanks to the formation of SAMs ( ). Their microcapacitive biosensor demonstrated both Gram-selective detection, as well as interbacterial strain differentiation with detection limits of 1 × 10 3 CFU mL −1 , a clinically relevant detection range . Since then, various breakthroughs in EIS biosensor performances have been achieved. Shi et al. put two phage display peptides specific to E. coli O157:H7 on a three-electrode system, capable of detecting 20 CFU mL −1 with only a 30 min incubation, which presents a remarkable improvement on LODs. Notably, Wilson et al. were able to detect E. coli with a LOD of 1 CFU mL −1 in potable water and 3.5 CFU mL −1 in apple juice without sample preparation and within only 25 min ( ). First, they subjected the sample to a preconcentration step using magnetic nanoparticles coated with Melittin. Next, EIS measurements were performed using an interdigitated electrode array screen-printed onto the PET substrate as an inexpensive alternative to gold electrodes that require photolithography. Their system showed good repeatability and stability . In contrast, Baek et al. selected a much smaller target, the human norovirus. They immobilized eight norovirus-specific phage display peptides onto the screen-printed working electrode through the formation of SAMs ( ). 
The obtained biosensors were able to detect copies 1.7 mL −1 from the oyster samples in 30 min without signal interference from another pathogenic species present, the rotavirus. This outstanding performance resulted in a biosensor much more sensitive than classical detection. Such a system provides a promising strategy for the identification and quantification of norovirus food contaminants with minimized sample preparations and volumes . A common feature in EIS biosensors is the correlation of bacterial concentrations with impedance signals at low frequencies, which is an indication that impedance is related to charge transfer properties on the surface of the electrode. However, at higher frequencies bacterial concentrations show less influence on impedance, suggesting that, at that stage, the dielectric relaxation of small dipoles, including water molecules, becomes more dominant in impedance changes . Photoelectrochemical (PEC) techniques differ slightly from other electrochemical methods in that an applied light source generates electron excitation and charge transfer from a photoexcited material, which is semiconductive and converts visible light into a photocurrent. Yin et al. chose upconversion nanophores (UCNPs), a fluorophore able to transfer photon energy into luminescence to develop a PEC lab-on-paper platform triggered by near infrared (NIR) light for the detection of E. coli O157:H7 in food samples ( ) . NIR light is suited for biosensor use, as it possesses low phototoxicity, and better biocompatibility than ultraviolet (UV) light , which may result in serious interference and unstable signals . Using Magainin I peptides as bioreceptors grafted onto paper working electrodes (PWE), the obtained biosensors demonstrated preferential binding to E. coli O157:H7, with the only mild interfering response obtained with S. typhimurium . 
They further improved the upconversion luminescence properties of their substrate by using silver nanoparticles (AgNPs) and exploiting their localized SPR (LSPR) effects, achieving the lowest limit of detection for Magainin I reported to date for this bacterium: 2 CFU mL −1 , even when testing in complex food matrices.
Optical biosensors quantify analytes through the correlation of binding events with a measurable characteristic of light waves. They are often based on the measurement of absorbance, reflectance or fluorescence emissions that occur in the UV, visible or NIR light spectra . Optical biosensors may either require labels, such as colorimetric or fluorescent approaches, or be label-free, such as biosensors based on SPR. The main advantages of these biosensors are reproducibility, sensitivity, the possibility of adaptation for multiplex detection and rapidity. Labeled biosensors measure colorimetric or fluorescent changes that occur upon the interaction of a chromophore or fluorophore with the analyte. They consist of four elements: a light source, a wavelength selection device, a substrate in which changes will occur upon interaction with analytes and a detector sensitive to the wavelength of interest . Fluorescence occurs when an electron is excited and a photon is emitted from an excited singlet state, and then, it relaxes to the ground state. This electron typically belongs to an aromatic molecule capable of producing fluorescence, called a fluorophore, which may be a dye, a product from an enzymatic reaction or a nanomaterial, such as nanoclusters (NCs) or quantum dots (QDs) . Fluorescence is by far the most popular approach for optical detection due to its high sensitivity, as the emission of even a single photon may be sufficient to quantify it . It is widely used in biosensing applications, as it is simple to set up, easily measured by fluorescence spectroscopy and it is normally the first proof-of-concept approach, such as in the case of ELISA immunoassays. Some of the lowest limits of detection reported to date have been the result of the incorporation of phage display peptides onto optical biosensors, being two orders of magnitude lower than those reached when using AMPs. Li et al. 
achieved an optical biosensor for the simultaneous detection of three pathogens ( E. coli , L. monocytogenes and B. melitensis ) using phage display peptides and multicolor QDs. For this, peptides were immobilized onto magnetic beads (MBs) for the recognition and enrichment of targets from the complex sample matrix. Then, three QD probes with different emission wavelengths were functionalized with three polyclonal antibodies, respectively. By mixing the functionalized MBs and QDs, they obtained peptide MBs–pathogen–QD probes sandwich immune complexes, which allowed for the simultaneous fluorescence detection of three pathogens. Their highly sensitive and specific 100 min assay was able to differentiate and quantify the three foodborne pathogens ( ) . Colorimetric biosensors measure absorbance or reflectance events in the UV–Vis spectrum upon the interaction of chromophores with one or more analytes. These sensors often include nanomaterials, such as nanoparticles and nanosheets (NSs) as reporter structures . Colorimetric platforms are commonly used for foodborne pathogen detection using peptides due to their versatility. Gold nanoparticle (AuNP)-based colorimetric assays have been widely used for biosensing, as they have unique surface plasmon resonance corresponding to their dispersion or aggregation state. Moreover, the concentration changes of targets can induce color changes visible to the naked eye. Liu et al. designed a colorimetric biosensor for the detection of S. aureus on various real water samples by immobilizing specific phage display peptides onto cysteamine-modified AuNPs (CS-AuNPs) ( ). Such functionalized NPs aggregated quickly in the presence of the target S. aureus and were successfully used to detect the pathogen within 30 min with a LOD of 19 CFU mL −1 and excellent selectivity over other bacteria. This approach is particularly interesting due to its sensitivity, specificity and rapidity, with no need for any costly instrument . 
Horseradish peroxidase (HRP) is an enzyme that is widely used in immunoassays such as ELISA due to its ability to catalyze the conversion of chromogenic substrates into colored products or produce light when acting on chemiluminescent substrates . Qiao et al. bioconjugated the AMP Magainin I with HRP through a biotin–streptavidin interaction for the rapid and extremely sensitive colorimetric detection of E. coli O157:H7 in apple juice and ground beef ( ). The AMP–HRP conjugate, used as a signal reporter, bound to LPS on the surface of the Gram-negative bacteria, followed by a filtration step to reduce non-specific binding and steric effects, after which the bacterial concentration could be easily visualized and quantified by UV–Vis absorption measurements. Their system could detect E. coli O157:H7 at levels as low as 13 CFU mL⁻¹ in a pure culture with a linear range of 10²–10⁵ CFU mL⁻¹ in 45 min without pre-enrichment . Although widely used in biochemistry, HRP has some limitations, such as high cost and low stability in some food matrices and over time. Consequently, there has been a surge in the search for stable, lower-cost inorganic nanomaterials with peroxidase-like activity. Such is the case of the manganese dioxide nanosheets (MnO₂ NSs) used by Liu et al. to immobilize specific peptides for the detection of Vibrio parahaemolyticus in water and seafood samples ( ). In this case, 9-mer phage display peptides were both fused to MnO₂ NSs to create a MnO₂ NSs@peptide complex and immobilized by physical adsorption onto a surface. In order to perform a sandwich immunoassay, bacteria were first incubated for two hours on the peptide-grafted surface to ensure binding. Next, the MnO₂ NSs@peptide fusion was added for one hour to create the sandwich complex. Finally, the chromogenic substrate 3,3′,5,5′-tetramethylbenzidine (TMB) was added for 30 min, which resulted in color changes according to the bacterial concentration, determined by absorbance measurements at 652 nm.
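Readouts such as the 652 nm absorbance above are converted to bacterial concentrations through a calibration curve measured from standards of known concentration. A minimal sketch with made-up absorbance values, assuming (as is typical over such linear ranges) a response linear in log10 of the concentration:

```python
import math

# Hypothetical calibration standards: (CFU/mL, absorbance at 652 nm).
standards = [(1e2, 0.12), (1e3, 0.25), (1e4, 0.38), (1e5, 0.51)]

# Least-squares fit of absorbance vs log10(concentration): A = m*log10(c) + b.
xs = [math.log10(c) for c, _ in standards]
ys = [a for _, a in standards]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - m * mx

def concentration(absorbance):
    """Invert the calibration line to estimate CFU/mL from a reading."""
    return 10 ** ((absorbance - b) / m)

print(m, b)                  # slope (per decade) and intercept of the fit
print(concentration(0.315))  # estimated CFU/mL for a sample reading
```

The same inversion applies whether the signal is an absorbance, a fluorescence intensity or an ECL intensity; only the calibration data change.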
Their system showed a wide detection range (20–10⁴ CFU mL⁻¹), a LOD of 15 CFU mL⁻¹ and excellent selectivity. Finally, practical performance was successfully demonstrated by spiking marine samples, with recoveries from 98.0 to 102.5% . As for label-free optical techniques, SPR-based sensing is commonly used for foodborne pathogen detection. SPR biosensors measure the changes in the refractive index in a dielectric medium due to the excitation of surface plasmons at the interface between said medium and a thin metal film, usually gold . Their main advantages are the capability for real-time, label-free detection with high sensitivity. Surface plasmon resonance imaging (SPRI) is a multiplex SPR approach based on an imaging mode. It allows for the simultaneous monitoring of the interactions between the analyte and hundreds of sensors on the same chip with a temporal response and kinetic information, which may provide additional discriminatory parameters . Pardoux et al. developed a prism coupler-based SPR biosensor using a five-AMP microarray for the detection of pathogenic bacteria. The detection of five different pathogens by SPRI can be achieved in a single 18 h step, as it is a label-free technique in which no pre-enrichment is required. In this case, the wide-spectrum recognition of AMPs was particularly relevant, as the differing levels of affinity characteristic of these peptides created a cross-reactive sensor matrix that, coupled with multivariate analyses, was able to accurately discriminate between bacteria ( ). Furthermore, they achieved some of the lowest LODs for E. coli O157:H7, S. epidermidis and S. typhimurium , detecting 51, 16 and 6 CFU mL⁻¹, respectively . Zhou et al. developed a waveguide coupler-based SPR biosensor using optical fibers for the detection of pathogenic Gram-negative E. coli O157:H7 in water and juice using Magainin I as a bioreceptor and AgNP-reduced graphene oxide (AgNP-rGO) nanocomposites for signal amplification ( ).
The biosensor had a LOD of 5 × 10² CFU mL⁻¹ and showed little to no interference from nonpathogenic or Gram-positive bacteria present in the sample, as well as remarkable reproducibility, with a 4.2% relative standard deviation (RSD) across five biosensors constructed in parallel . Electrochemiluminescence (ECL), in contrast to photoelectrochemistry, consists of monitoring the production of photons, namely the light intensity produced during an electrochemical reaction in solution. This analytical method provides outstanding benefits: excellent sensitivity due to the absence of background noise, versatility, spatial and temporal resolution and electrochemical control of the reactivity. Li et al. incorporated Magainin I into an ECL platform in a sandwich assay for the highly specific detection of E. coli O157:H7 in water. They immobilized Magainin I onto the gold working electrode surface as a bioreceptor. Additionally, they labeled the peptide with a ruthenium complex (Ru1) ECL label, whose ECL intensity increases proportionally with increasing bacterial concentrations in the sample. Their biosensor, which did not need any pre-enrichment or separation steps, achieved a LOD of 1.2 × 10² CFU mL⁻¹ and allowed Magainin I to keep its characteristic selectivity towards Gram-negative bacteria ( ). However, it was not able to discriminate between pathogenic E. coli O157:H7 and S. typhimurium .
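Reproducibility figures such as the 4.2% RSD above are simply the sample standard deviation of replicate sensor responses divided by their mean. A short sketch with hypothetical readings from five sensors built in parallel:

```python
import statistics

# Hypothetical responses of five biosensors built in parallel (arbitrary units).
responses = [98.2, 101.5, 99.8, 103.0, 97.5]

mean = statistics.mean(responses)
sd = statistics.stdev(responses)  # sample standard deviation (n - 1 denominator)
rsd = 100 * sd / mean             # relative standard deviation, %

print(f"mean = {mean:.1f}, RSD = {rsd:.1f}%")
```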
Mechanical biosensors are based on the measurement of forces, displacements and mass changes . Most mechanical biosensors have a small cantilever sensitive to the molecule of interest. The microcantilever translates binding events into mechanical signals by monitoring deflection changes. Etayash et al. developed a microfluidic channel on a bimaterial cantilever (BMC), functionalized with an anti- L. monocytogenes monoclonal antibody and the AMP Leucocin A, to detect L. monocytogenes in a volume of only 50 picoliters ( ). Bacterial adsorption induced changes in the resonance frequency and cantilever deflection. When the trapped bacteria were excited with infrared radiation, the cantilever deflected in proportion to the infrared absorption of the bacteria, providing a nanomechanical infrared spectrum for selective bacterium identification. The Leucocin A-coated BMC exhibited preferential binding to L. monocytogenes two to three orders of magnitude higher than to E. coli . Furthermore, they achieved a limit of detection of 100 cells in 100 µL water samples. Through the incorporation of infrared absorption spectroscopy, they were able to accurately differentiate between injured and intact cells . summarizes peptide-based biosensors using various transduction systems together with their performances. Clearly, the excellent stability and low production cost make peptides very promising bioreceptors compared to antibodies. Most importantly, the performances of the obtained peptide-based biosensors are remarkable. Although various breakthroughs have been achieved, and in some cases the biosensor performance is already comparable to that of classical techniques or immunoassays, key challenges remain for foodborne pathogen detection biosensors. The main ones often concern the complexity of the food matrix itself due to its diverse composition, as well as the electrical charge of said components.
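As a rough illustration of the nanomechanical readout, a cantilever's resonance frequency in the simple harmonic-oscillator approximation is f = (1/2π)·sqrt(k/m), so the adsorbed mass can be back-calculated from a downward frequency shift. The spring constant and frequencies below are illustrative assumptions, not values from the cited work:

```python
import math

def effective_mass(k, f):
    """Effective oscillating mass (kg) from spring constant k (N/m) and resonance frequency f (Hz)."""
    return k / (2 * math.pi * f) ** 2

k = 0.1          # illustrative spring constant, N/m
f0 = 100_000.0   # resonance frequency of the bare cantilever, Hz
f1 = 99_900.0    # frequency after bacterial adsorption, Hz (shifted down)

# Mass added by adsorption: frequency drops, so the effective mass rises.
added_mass = effective_mass(k, f1) - effective_mass(k, f0)
print(f"adsorbed mass ~ {added_mass * 1e15:.0f} fg")
```

A 0.1% frequency shift in this toy example corresponds to roughly half a picogram, which conveys why such resonators can register the adsorption of a handful of cells.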
In such media, the accurate detection of bacterial species might be especially challenging for peptides whose binding is dominated by electrostatic interactions. As an example, Etayash et al. succeeded in the discrimination of multiple species of pathogenic Gram-positive bacteria in buffer solutions. However, the results were not the same when working with pure milk samples, possibly due to the high protein content of the sample . Another major challenge is cross-contamination from other microorganisms. When a biosensing platform is developed for a specific application, it is important to screen against all typically cross-reactive species in that particular context in order to validate its applicability, which several reported biosensors have failed to do . To address the inability of peptides to account for cross-contaminating dead bacteria, Fan et al. coupled their detection technique with a luciferase bioluminescence system to quantify ATP, a molecule only found in live organisms . Furthermore, there are varieties of proteases in different foods, especially unprocessed foods, which can degrade peptides into smaller molecules or single amino acids and inactivate them. These proteases, such as trypsin, thermolysin or carboxypeptidases, are one of the major limitations preventing the real-life application of peptide-based biosensors. However, the stability of peptides may be increased through chemical modifications that prevent enzymatic degradation, including the application of click chemistry to stabilize peptide dimerization or multimerization , replacement of an L-enantiomer by its D-enantiomer and conjugation of specific groups such as fatty acids or side-chain analogs to peptide side chains or N- or C-terminals . These fine-tunings make it difficult for proteases to recognize the cleavage sites, giving the peptide prominent proteolytic resistance.
However, chemical modifications may decrease or abolish the recognition efficiency of the peptide, and the binding properties of stabilized peptides must therefore be tested before their implementation.
Limitations in classical sensing technologies have resulted in a surge in the exploration of innovative, nonconventional methodologies. In parallel with the development of biosensors, other sensor-based technologies are emerging. A notable example is the electronic nose (eN), which takes a completely different approach to detecting the presence of pathogenic bacteria by analyzing their emitted volatile organic compounds (VOCs). Indeed, bacteria produce and emit VOCs that play a vital role in inter- and intraorganismal communication. They may serve as signal molecules between species, chemical ‘manipulators’ to alter metabolic pathways, contribute to nutrient scavenging or participate in developmental processes . The bacterial headspace, referring to the gaseous mixture above a bacterial culture, has been the basis for microorganism identification, as VOCs can be species-specific . This type of detection is beginning to be explored due to its potential applications in the diagnosis of infectious diseases in humans, and great efforts have been made to characterize the VOC composition of patients’ exhaled breath, saliva, urine and feces at various states of health [ , , ]. In the food industry, efforts to detect specific VOCs indicative of freshness, adulteration and foodborne pathogen contamination at trace levels are ever growing, whether in food samples themselves, during the processing stages or in their packaging. Therefore, electronic noses could be relevant alternatives for foodborne pathogen detection [ , , ]. Since eNs require no sample preparation, they can be used to analyze and screen foodstuffs in all phases of production. Electronic noses are a broad class of instruments constituted by an array of chemical sensors with partial specificity to VOCs coupled to a pattern-recognition system that detects and identifies odors .
Their response to VOCs is a distinct, fingerprint-like recognition pattern usually stored in a database, which acts as a reference library to which future samples are compared. These systems were inspired by the biological sense of smell, in which the sensation of smell is produced upon the binding of VOCs emitted by an object to odorant-binding proteins (OBPs), which relay the aromatic molecules onto olfactory receptors (ORs) located in the nose . Afterwards, olfactory neurons convey the received signal to the cortex of the brain, which oversees signal processing and interpretation for the identification of specific odors. Several groups have developed peptide-based eNs for foodborne pathogen detection. For example, the group of T.H. Park used a peptide derived from a natural olfactory receptor that can specifically recognize trimethylamine (TMA) to monitor seafood spoilage . TMA is an indicator VOC whose concentration in seafood increases after death due to the decomposition of trimethylamine-N-oxide. In this case, single-wall carbon nanotube field effect transistors (SWCNT-FETs) functionalized with olfactory receptor-derived peptides (ORPs) were used to selectively detect TMA at a concentration of 10 fM in real time without sample pretreatment and with excellent selectivity ( ). Furthermore, the eN was able to discriminate between spoiled seafood and other food samples. Sankaran et al. synthesized a polypeptide derived from a Drosophila OBP named LUSH . The chosen 14-mer peptide included the protein’s sensing domain, known to bind preferentially to alcohols such as 3-methyl-1-butanol and 1-hexanol, characteristic odorants of Salmonella contamination. Four peptide receptors were grafted onto a QCM through the formation of SAMs. When testing packaged beef, they were able to detect 1 ppm of 3-methyl-1-butanol and 1-hexanol, a relevant LOD for industrial applications, with good repeatability and reproducibility . Similarly, Son et al.
employed a 20-mer peptide derived from the LUSH protein’s binding domain to detect Salmonella contamination in ham using a carbon nanotube field effect transistor (CNT-FET). They immobilized the peptide onto CNTs by π–π stacking through the addition of three phenylalanine amino acids at the C-terminus. Their system was able to detect 3-methyl-1-butanol at a concentration of 1 fM in real time ( ) . In a recent example, Shumeiko et al. succeeded in distinguishing between the odors of sterile growth medium, E. coli and Klebsiella pneumoniae by incorporating peptide-functionalized SWCNTs into a low-cost NIR photoluminescence optical nose for the detection of these species’ indicator VOCs. When dispersed in aqueous solutions, SWCNTs emit photoluminescence upon excitation with an appropriate wavelength . In this case, they used five peptides selected for their ability to disperse SWCNTs in water and the resulting photoluminescence intensity. Upon a 60 s exposure of the sensor to E. coli and K. pneumoniae , none of the five receptors was able to differentiate between sterile and spiked mediums. However, accurate discrimination of the samples was achieved upon analyzing the recovery kinetics of the sensors, highlighting the crucial role of data processing in electronic noses . It is clear that the recent interest in developing eN platforms for bacterial detection has resulted in extremely sensitive instruments capable of real-time monitoring. However, the development of eNs with greater selectivity towards VOC targets may result in great breakthroughs. To this end, some attempts have been made at the adaptation of phage display panning for screening specific peptides for gas sensing, especially for the detection of explosives , but to the best of our knowledge, none has been used for food quality assessment or foodborne pathogen detection.
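In its simplest form, the pattern-recognition step of an electronic nose reduces to matching a new sensor-array response against a library of stored fingerprints. A toy nearest-neighbor sketch with invented five-sensor response vectors (real systems use far richer features, e.g. the recovery kinetics mentioned above):

```python
import math

# Hypothetical reference library: odor fingerprints from a five-sensor array.
library = {
    "sterile medium": [0.10, 0.08, 0.12, 0.09, 0.11],
    "E. coli":        [0.55, 0.20, 0.35, 0.60, 0.15],
    "K. pneumoniae":  [0.25, 0.62, 0.18, 0.30, 0.50],
}

def classify(sample):
    """Return the library entry with the smallest Euclidean distance to the sample."""
    return min(library, key=lambda name: math.dist(library[name], sample))

reading = [0.52, 0.22, 0.33, 0.58, 0.18]  # unknown headspace measurement
print(classify(reading))
```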
Biosensor technologies are very promising for the development of alternatives for pathogen detection with high sensitivity, low cost, rapid response and potentially portable devices for on-site analysis. The use of peptides as bioreceptors in biosensors is a growing field due to their versatility, increased stability in harsh conditions compared to other biomolecules, their compatibility with biosensor construction by maintaining their activity even after being dried and the possibility of finding or designing peptide sequences with affinities similar to those of antibodies. Furthermore, compared to the classical detection methods, one of the main advantages of peptide-based biosensors and electronic noses is that they bypass complex sample preparation, the most time-consuming and expensive step of foodborne pathogen detection, as their targets are either the bacterial surface epitopes or the emitted VOCs present in the headspace instead of intracellular biomarkers. Multiple selection strategies have resulted in the creation of sensitive and selective peptide bioreceptors, each with their own advantages. AMPs have been part of living organisms’ defense systems for millennia and mostly rely on membrane disruption and blocking the metabolic functions of competing microorganisms. Most AMPs have a selectivity towards a certain type of microorganism, be it Gram-positive or -negative bacteria, yeasts or fungi. Additionally, their longevity as defenders of living organisms and their mechanism of action decrease the possibility for targeted microorganisms to develop resistance against them. Alternatively, phage display is one of the most notable recent developments in the field, achieving the screening of millions of peptide candidates for the selection of a few highly selective probes.
The incorporation of phage display peptides has resulted in extremely sensitive biosensors able to discern between serovars or strains of the same species, a feat AMPs are mostly unable to perform. However, this strategy is limited by the fact that the presentation of the target to the peptides displayed by phages is a determining factor, which can be especially challenging when dealing with bacterial targets due to the complexity of the bacterial membrane. In recent years, the availability of bioinformatics tools has resulted in the development of much faster screening processes, and these are promising alternatives when screening using biological methods, such as phage display, is not feasible. These strategies are especially advantageous because they allow the user to explore millions of candidates in silico without having to synthesize them, making them a more cost-effective option. Furthermore, these approaches can be combined for better peptide selection, for example, by using known AMPs as starting templates to construct chimeric peptides with enhanced selectivity, by iterating specific motifs known to confer given physicochemical or structural characteristics or by further refining the specificity of peptides selected by phage display. Various peptide-based biosensors and electronic noses have been successfully developed for foodborne pathogen detection with good performances. For the future, in the age of miniaturization, there is a clear tendency in the biosensor field towards portable technologies, such as the use and integration of microfluidic devices. These devices pass extremely low sample volumes, down to picoliter levels, through microchannels, usually designed computationally and fabricated with polymers using soft lithography [ , , , , ]. Although well suited for on-field applications, the extremely low volume of analysis may present a limitation.
To ensure efficient detection, microfluidic devices may be used in a preconcentration step in order to obtain a smaller volume with a higher concentration, which may then be detectable with conventional methods. Regarding transduction techniques, there have been several breakthroughs. For example, in electrochemical transduction, the use of screen-printed electrodes and the development of photoelectrochemical biosensors have greatly improved the efficiency of detection. As for optical transducers, SPRI has proven to be a reliable approach, as it provides excellent sensitivity with the possibility of making multiplex detections simultaneously and providing kinetic parameters, which may result in improved discrimination. Finally, hybrid methodologies, such as electrochemiluminescence, show great promise due to their simple optical setup, versatility and exceptional sensitivity. Furthermore, the addition of nanomaterials to the sensing components for signal enhancement is a trend that has resulted in enormous improvements in foodborne pathogen biosensors. Nanomaterials have been incorporated into various types of transducers due to their capability of amplifying detection signals, which is a crucial factor for reaching a higher sensitivity. Recent works have demonstrated that the full potential of nanomaterials, such as nanoparticles, nanosheets, nanoclusters and quantum dots, is just beginning to be explored in depth, especially concerning their role in enhancing the performance of existing detection strategies [ , , , , , , ]. Finally, the commercial success of any one of these developed biosensors depends on their ability to reliably address one of the major limitations of the classical techniques (i.e., detection time, portability, sensitivity or a combination of the aforementioned) while still being economically viable to implement at the industrial level.
Solubility Enhancement of Active Pharmaceutical Ingredients through Liquid Hydrotrope Addition: A Thermodynamic Analysis

Introduction

In the complex world of
pharmaceutical development, the poor water
solubility of active pharmaceutical ingredients (APIs) is a critical
determinant that influences drug bioavailability and therapeutic efficacy.
The biopharmaceutical classification system (BCS) acts as a navigational
guide, classifying drugs into four categories (I to IV) based on their
solubility and permeability characteristics. Statistics reveal that approximately 40% of the existing market
APIs and about 90% of emerging APIs in research and clinical trials
exhibit poor water solubility, falling into classes II (low solubility
and high permeability) and IV (low solubility and low permeability)
according to the BCS classification. , The poor API
solubility can lead to delayed distribution within the body or, in
some instances, result in active ingredients being excreted without
absorption. Among the different available
methods for API solubility enhancement, , hydrotropy
is emerging as one of the most simple-to-apply and effective approaches. The concept of “hydrotropy” was first introduced
by German chemist Carl Neuberg in 1916. He defined hydrotropes (excipients) as amphiphilic compounds capable
of increasing the solubility of hydrophobic substances in water by
mechanisms other than micellar solubilization. These mechanisms include
hydrotrope self-aggregation, hydrotropic destruction of water structures,
and complexation between the hydrotrope and API. Amphiphilic compounds have hydrophilic (water-attracting)
and hydrophobic (water-repelling) parts within the molecule. Examples
of commonly used hydrotropes are sodium salts of short alkylbenzenesulfonates,
sodium salts of benzoates, aromatic fatty acids, urea, and nicotinamide. , Recently, ionic liquids, deep eutectic solvents, and biobased solvents
have been utilized as hydrotropes. − Most of the conventionally
used hydrotropes are inexpensive, stable, have low toxicity, and are
environmentally friendly, which makes them suitable for pharmaceutical
applications. While the hydrotropy
concept is easy to apply in practice, selecting
the most appropriate hydrotrope remains challenging due to the extensive
variety of available compounds. This selection process requires a
deep understanding of the hydrotropy mechanism, which has been debated
for many years. Hydrotrope-solute interactions have been studied using
various methods, such as NMR spectroscopy. , The selection of hydrotropes has relied on trial-and-error experimental
screening to identify those that exhibit strong API–hydrotrope
interactions. However, when the ability
of the hydrotrope to enhance API solubility in water is assessed,
the interactions among API–hydrotrope, API–water, and
hydrotrope–water should be considered. To the best of our knowledge,
no study has comprehensively investigated the influence of all interactions
among API, hydrotrope, and water on API solubility. From the
solid–liquid equilibrium (SLE) perspective, the
solubility of an API in the ternary API/hydrotrope/water system depends
on the melting properties of the pure components and their activity
coefficients in the liquid phase. The activity coefficients of components
in the liquid phase quantify the intermolecular interactions among
the API, hydrotrope, and water. Thermodynamic models, such as activity
coefficient models, can be used to describe the nonideality of the
liquid solution and calculate the activity coefficients of the components
in the liquid phase. Thermodynamic models are broadly classified as
correlative, predictive, or semipredictive. Correlative models, such
as Van Laar, Margules, non-random two-liquid (NRTL), and UNIQUAC, utilize experimental equilibrium
data to fit adjustable parameters and calculate solubility without
explicitly relying on the molecular structure of the components. Predictive
models, including Hansen, UNIFAC, and COSMO-RS, calculate
the activity coefficients to estimate solubility based on the molecular
structure and theoretical principles, thereby reducing reliance on
experimental data. Semipredictive models, such as PC-SAFT and NRTL-SAC, , integrate
aspects of both approaches by combining structural information with
limited experimental input. This work investigates the enhancement
of the API solubility in
water by the addition of a hydrotrope that is liquid at the temperature
of the solution, assuming hypothetical ternary systems composed of
a model API, a model hydrotrope, and water. The scenario with solid
excipients, including hydrotropes, has been discussed in our previous
publication. For the systematic study
in this work, a hypothetical ternary system composed of an API, hydrotrope,
and water was considered. The study focuses on understanding how the
nonideality of the liquid phase influences the solubility enhancement
of the API from a thermodynamic perspective. We considered all pairwise
molecular interactions within the system and analyzed the effect of
temperature on the solubility of the API. The Two-Suffix Margules
equation was used to model the nonideality of the hypothetical ternary
liquid solution. Theoretical analysis was validated using data from
the literature, applying the NRTL model, which enabled the analysis of binary interaction
and self-component energies in the system. This study aims to highlight
the role of thermodynamic modeling in guiding the selection of suitable
liquid hydrotropes and their concentrations in enhancing API solubility
in water.
Methods

2.1 Solid–Liquid Equilibrium

SLE
data can be represented graphically by constructing a phase diagram. a illustrates the
solubility isotherms of the ternary system consisting of API (1),
hydrotrope (2), and water (3) at different temperatures. In specific
temperature ranges lower than the melting temperature of the hydrotrope,
where the hydrotrope is solid, ternary mixtures exhibit a eutectic
point “eu”, where the solubility lines of the API and
hydrotrope intersect. At higher temperatures, where the hydrotrope
is in its liquid state, there is no hydrotrope solubility line in
the ternary phase diagram. The isotherms represent only the API solubility
lines, indicating that the hydrotrope is completely miscible in aqueous
solution. b
shows a single solubility isotherm without a eutectic point (liquid
hydrotrope) and the phases formed in each region of the phase diagram.
At the solubility isotherm, solid API (S_1) is in equilibrium
with the liquid phase (L), which consists of API, hydrotrope, and
water. The API solubility line extends between points ″a″
and ″b″. These solubility lines intersect the ternary
phase diagram axis at the solubility of the API in the pure water
(point ″a″) or the pure hydrotrope (point ″b″)
at the temperature corresponding to the solubility line. This study
exclusively focuses on liquid hydrotropes at the solution temperature,
and thus, the presented solubility isotherms in the following sections
include only the API solubility line.

2.2 Modeling of Solid–Liquid Equilibria

The solubility line of the API in b can be calculated using the following equation

ln(x_1^L γ_1^L) = −(Δh_m/R)(1/T − 1/T_m) + (Δc_p/R)[T_m/T − 1 − ln(T_m/T)]    (1)

where x_1^L and γ_1^L are the mole fraction and the activity coefficient of the API in the liquid phase, respectively; T is the temperature; Δh_m and T_m are the melting enthalpy and temperature of pure API, respectively; Δc_p is the difference between the constant pressure heat capacities of pure API in the solid and liquid states at T_m; and R is the universal gas constant. In
many cases, the Δc_p term (second term on the right-hand side) has a smaller influence on the solubility curve compared to the Δh_m term (first term on the right-hand side). Thus, for the sake of simplicity, the Δc_p term was not considered in this study, and the following expression was
used to calculate the solubility line

ln(x_1^L γ_1^L) = −(Δh_m/R)(1/T − 1/T_m)    (2)

The activity coefficients of the components in the liquid phase were calculated using the Two-Suffix Margules activity model as follows

ln γ_1^L = A_12 x_2² + A_13 x_3² + (A_12 + A_13 − A_23) x_2 x_3    (3)

Expressions for γ_2^L and γ_3^L are similar to those for γ_1^L and can be derived by swapping the indexes. To obtain γ_2^L, index 1 is replaced with 2, 2 with 3, and 3 with 1. To obtain γ_3^L, index 1 is replaced with 3, 3 with 2, and 2 with 1. In the ternary system API (1)/hydrotrope (2)/water (3), the parameters A_12, A_23, and A_13 represent the
binary interaction parameters between the components. In addition to the Two-Suffix Margules activity model, the NRTL model was applied to calculate the binary interaction energy parameters (U) to determine the self-component binary energy interactions

ln γ_i = [Σ_j A_ji G_ji x_j] / [Σ_k G_ki x_k] + Σ_j {x_j G_ij / [Σ_k G_kj x_k]} {A_ij − [Σ_m x_m A_mj G_mj] / [Σ_k G_kj x_k]}    (4)

G_ij = exp(−α_ij A_ij)    (5)

G_ji = exp(−α_ij A_ji)    (6)

where A_ij and A_ji are the binary interaction parameters between components i and j; x_i is the mole fraction of component i; and α_ij denotes the nonrandomness factor.
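As a consistency check, the multicomponent NRTL expressions of eqs 4-6 (assumed here in their standard form) can be compared numerically with the ternary Two-Suffix Margules model: with symmetric parameters and α_ij = 0, the NRTL result coincides with a Margules expression whose constants equal twice the symmetric NRTL parameters. A minimal sketch:

```python
import math

def nrtl_lngamma(i, x, A, alpha):
    """ln gamma_i from the multicomponent NRTL model (standard form assumed)."""
    n = len(x)
    G = [[math.exp(-alpha[p][q] * A[p][q]) for q in range(n)] for p in range(n)]
    s1 = sum(x[j] * A[j][i] * G[j][i] for j in range(n)) / sum(x[k] * G[k][i] for k in range(n))
    s2 = 0.0
    for j in range(n):
        d = sum(x[k] * G[k][j] for k in range(n))
        s2 += (x[j] * G[i][j] / d) * (A[i][j] - sum(x[m] * A[m][j] * G[m][j] for m in range(n)) / d)
    return s1 + s2

def margules_lngamma1(x, A12, A13, A23):
    """ln gamma_1 of component 1 from the ternary Two-Suffix Margules model."""
    _, x2, x3 = x
    return A12 * x2**2 + A13 * x3**2 + (A12 + A13 - A23) * x2 * x3

# Symmetric A and alpha = 0: NRTL collapses to Margules with constants 2*A_ij.
A = [[0.0, 0.8, 1.5], [0.8, 0.0, -0.4], [1.5, -0.4, 0.0]]  # illustrative values
alpha = [[0.0] * 3 for _ in range(3)]
x = [0.1, 0.5, 0.4]
print(nrtl_lngamma(0, x, A, alpha), margules_lngamma1(x, 1.6, 3.0, -0.8))
```

Both calls return the same value, which illustrates the simplification discussed next.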
If A_ij is assumed to equal A_ji and the mixture is assumed to be completely random (α_ij = 0), the NRTL model can be simplified to the Two-Suffix Margules activity model. The parameters U_ij, U_ji, U_ii, and U_jj denote the binary interaction energy parameters between the molecules in the system, including self-interactions. The value of U_ij is always equal to that of U_ji, indicating symmetry in the interaction energies between pairs of molecules. However, A_ij is not necessarily equal to A_ji. For the ternary
system, the binary interaction parameters are defined as follows

A_12 = (U_12 − U_22)/(RT);  A_21 = (U_12 − U_11)/(RT)    (7)

A_13 = (U_13 − U_33)/(RT);  A_31 = (U_13 − U_11)/(RT)    (8)

A_23 = (U_23 − U_33)/(RT);  A_32 = (U_23 − U_22)/(RT)    (9)

The NRTL model parameters were obtained by fitting experimental phase equilibria data of the respective systems using the following objective function

RMSD = [(1/n) Σ_k (x_1,k^exp − x_1,k^cal)²]^(1/2)    (10)

where RMSD is the root-mean-square deviation, measuring the discrepancy between the experimentally measured mole fraction x_1^exp and the calculated mole fraction x_1^cal for n data points.

2.3 Solubility Enhancement Factor

In this study, solubility (S) was defined as the moles of the API dissolved in moles of water at a specific temperature and calculated using the following equation

S = x_1/x_3    (11)

where x_1 and x_3 are the mole fractions of API and water, respectively.
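As a numerical illustration, taking the model API melting properties introduced in section 2.4 (T_m = 400 K, Δs_m = 54.4 J mol⁻¹ K⁻¹, with Δh_m = T_m·Δs_m) and assuming an ideal solution (γ_1 = 1), eq 2 gives the API mole fraction, which eq 11 converts to S for a binary API/water mixture:

```python
import math

R = 8.314          # J mol^-1 K^-1, universal gas constant
T_m = 400.0        # K, model API melting temperature
ds_m = 54.4        # J mol^-1 K^-1, melting entropy (Walden's rule)
dh_m = T_m * ds_m  # J mol^-1, melting enthalpy

def x1_ideal(T):
    """Ideal (gamma_1 = 1) API mole-fraction solubility from eq 2."""
    return math.exp(-dh_m / R * (1.0 / T - 1.0 / T_m))

x1 = x1_ideal(298.15)
S = x1 / (1.0 - x1)  # eq 11 for a binary API/water solution (x3 = 1 - x1)
print(f"x1 = {x1:.4f}, S = {S:.4f}")
```

The ideal value is an upper reference point; for a poorly water-soluble API, γ_1 >> 1 in water pushes the actual solubility far below it.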
The API solubility enhancement due to the addition of the hydrotrope was evaluated using the solubility enhancement factor (Φ), which was defined as the ratio between the API solubility in the presence of the hydrotrope and its solubility in the absence of the hydrotrope

Φ = S(x_2)/S(x_2 = 0)    (12)

where x_2 is the hydrotrope mole fraction. According to eqs 11 and 12, Φ increases with the increasing API mole fraction (x_1) or with the decreasing water mole fraction (x_3) in the solution. c shows an example of how Φ changes with the change of the hydrotrope mole fraction (x_2) at a constant temperature along the solubility isotherm. As x_3 approaches 0, Φ becomes infinite. d illustrates the
variation in API mole fraction (x_1) along the API solubility line relative to the API solubility in pure water (x_0) (equivalent to point ″a″ in b) at constant temperature. In this study, the figures illustrating the variation of (x_1/x_0)
with the hydrotrope mole fractions were used to show and discuss how
API solubility is influenced by the intermolecular interactions between
the components in the liquid solution at a specific temperature. The
representations with Φ were employed to display the variation
of the API solubility enhancement factor with the hydrotrope mole
fraction at different temperatures. The Φ presented in this
work was calculated at the point, where the hydrotrope mole fraction
is 0.5 ( x 2 = 0.5). This point provides
a balanced representation of the effect of the hydrotrope on API solubility,
avoiding extreme values, where Φ might become impractically
high or low. It also allows for consistent comparisons across different
hydrotropes and conditions. 2.4 Melting Properties of Pure Components The melting properties of a pure component are influenced by molecular
symmetry, conformational diversity, and intermolecular forces. At the melting point, the melting properties
are interrelated by the following equation 13 where Δ s m is the melting entropy. The selected melting properties are
typical for APIs and liquid hydrotropes, which have dissimilar molecular
structures. APIs are generally rigid molecules with ordered crystalline
structures, while hydrotropes exhibit less ordered crystal structures.
To represent an API with low solubility in water, a high melting temperature
(400 K) and entropy (Walden’s rule; Δ s m = 54.4 J mol –1 K –1 ) were assumed for the API; the melting
enthalpy of the API was calculated with (see ). The definition of Walden’s rule in this study
applies to ordered crystals of rigid molecules. The hydrotrope was assumed to have a melting entropy of
Δ s m = 28.1 J mol –1 K –1 to represent components with high solubility
in water. The hydrotrope melting temperature
of 285 K was assumed to simulate liquid hydrotropes at the solution
temperature and atmospheric pressure, with the melting enthalpy calculated
using .
2.1 Solid–Liquid Equilibrium
SLE
data can be represented graphically by constructing a phase diagram. a illustrates the
solubility isotherms of the ternary system consisting of API (1),
hydrotrope (2), and water (3) at different temperatures. In specific
temperature ranges lower than the melting temperature of the hydrotrope,
where the hydrotrope is solid, ternary mixtures exhibit a eutectic
point “eu”, where the solubility lines of the API and
hydrotrope intersect. At higher temperatures, where the hydrotrope
is in its liquid state, there is no hydrotrope solubility line in
the ternary phase diagram. The isotherms represent only the API solubility
lines, indicating that the hydrotrope is completely miscible in aqueous
solution. b
shows a single solubility isotherm without a eutectic point (liquid
hydrotrope) and the phases formed in each region of the phase diagram.
At the solubility isotherm, solid API (S 1 ) is in equilibrium
with the liquid phase (L), which consists of API, hydrotrope, and
water. The API solubility line extends between points ″a″
and ″b″. These solubility lines intersect the ternary
phase diagram axis at the solubility of the API in the pure water
(point ″a″) or the pure hydrotrope (point ″b″)
at the temperature corresponding to the solubility line. This study
exclusively focuses on liquid hydrotropes at the solution temperature,
and thus, the presented solubility isotherms in the following sections
include only the API solubility line.
2.2 Modeling of Solid–Liquid Equilibria
The solubility line of the API in b can be calculated using the following equation

$$\ln\left(x_{1}^{\mathrm{L}}\gamma_{1}^{\mathrm{L}}\right)=-\frac{\Delta h_{\mathrm{m}}}{R}\left(\frac{1}{T}-\frac{1}{T_{\mathrm{m}}}\right)+\frac{\Delta c_{p}}{R}\left(\frac{T_{\mathrm{m}}}{T}-1-\ln\frac{T_{\mathrm{m}}}{T}\right) \qquad (1)$$

where x 1 L and γ 1 L are the mole fraction and the activity coefficient of the API in the liquid phase, respectively; T is the temperature; Δ h m and T m are the melting enthalpy and temperature of pure API, respectively; Δ c p is the difference between the constant pressure heat capacities of pure API in the solid and liquid states at T m ; and R is the universal gas constant. In many cases, the Δ c p term (second term on the right-hand side) has a smaller influence on the solubility curve than the Δ h m term (first term on the right-hand side). Thus, for the sake of simplicity, the Δ c p term was not considered in this study, and the following expression was used to calculate the solubility line

$$\ln\left(x_{1}^{\mathrm{L}}\gamma_{1}^{\mathrm{L}}\right)=-\frac{\Delta h_{\mathrm{m}}}{R}\left(\frac{1}{T}-\frac{1}{T_{\mathrm{m}}}\right) \qquad (2)$$

The activity coefficients of the components in the liquid phase were calculated using the Two-Suffix Margules activity model as follows

$$\ln\gamma_{1}^{\mathrm{L}}=A_{12}x_{2}^{2}+A_{13}x_{3}^{2}+\left(A_{12}+A_{13}-A_{23}\right)x_{2}x_{3} \qquad (3)$$

Expressions for γ 2 L and γ 3 L are similar to
those for γ 1 L and can be derived
by swapping the indexes. To obtain γ 2 L , index 1 is replaced with 2, 2 with
3, and 3 with 1. To obtain γ 3 L , index 1 is replaced with 3, 3 with 2, and
2 with 1. In the ternary system API (1)/hydrotrope (2)/water (3),
the parameters A 12 , A 23 , and A 13 represent the
binary interaction parameters between the components. In addition
to the Two-Suffix Margules activity model, the NRTL model was applied to calculate the binary interaction energy parameters ( U ) to determine the self-component binary energy interactions (eqs 4–6), where A ij and A ji are the
binary interaction parameters between components i and j ; x i is the mole fraction of component i ; and
α ij denotes the nonrandomness factor.
If A ij is assumed to
equal A ji and the mixture
is assumed to be completely random (α ij = 0), the NRTL model can be simplified to the Two-Suffix Margules
activity model. The parameters U ij , U ji , U ii , and U jj denote
the binary interaction energy parameters between the molecules in
the system, including self-interactions. The value of U ij is always equal to that of U ji , indicating symmetry in
the interaction energies between pairs of molecules. However, A ij is not necessarily equal
to A ji . For the ternary
system, the binary interaction parameters are defined in terms of the interaction energies (eqs 7–9). The NRTL model parameters were obtained by fitting experimental phase equilibria data of the respective systems using the following objective function

$$\mathrm{RMSD}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(x_{1,i}^{\mathrm{exp}}-x_{1,i}^{\mathrm{cal}}\right)^{2}} \qquad (10)$$

where RMSD is the root-mean-square deviation, measuring the discrepancy between the experimentally measured mole fraction x 1 exp and the calculated mole fraction x 1 cal over the n data points.
2.3 Solubility Enhancement Factor
In this study, solubility ( S ) was defined as the moles of the API dissolved in moles of water at a specific temperature and calculated using the following equation

$$S=\frac{x_{1}}{x_{3}} \qquad (11)$$

where x 1 and x 3 are the mole fractions of API and water, respectively. The API solubility enhancement due to the addition of the hydrotrope was evaluated using the solubility enhancement factor (Φ), which was defined as the ratio between the API solubility in the presence of the hydrotrope and its solubility in the absence of the hydrotrope

$$\Phi=\frac{S\left(x_{2}\right)}{S\left(x_{2}=0\right)} \qquad (12)$$

where x 2 is the hydrotrope mole fraction. According to eqs 11 and 12, Φ increases with the increasing API mole fraction ( x 1 ) or with the decreasing water mole fraction ( x 3 ) in the solution. c shows an example of how Φ changes with the hydrotrope mole fraction ( x 2 ) at a constant temperature along the solubility isotherm. As ( x 3 ) approaches 0, Φ diverges to infinity. d illustrates the
variation in API mole fraction ( x 1 ) along
the API solubility line relative to the API solubility in pure water
( x 0 ) (equivalent to point ″a″
in b) at constant
temperature. In this study, the figures illustrating the variation
of ( x 1 / x 0 )
with the hydrotrope mole fractions were used to show and discuss how
API solubility is influenced by the intermolecular interactions between
the components in the liquid solution at a specific temperature. The
representations with Φ were employed to display the variation
of the API solubility enhancement factor with the hydrotrope mole
fraction at different temperatures. The Φ values presented in this work were calculated at the point where the hydrotrope mole fraction is 0.5 ( x 2 = 0.5). This point provides a balanced representation of the effect of the hydrotrope on API solubility, avoiding extreme values where Φ might become impractically
high or low. It also allows for consistent comparisons across different
hydrotropes and conditions.
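Eqs 11 and 12 translate directly into code. A minimal sketch (the function names are illustrative, not from the original work):

```python
def solubility_S(x1, x3):
    # eq 11: moles of API dissolved per mole of water
    return x1 / x3

def enhancement_factor(x1, x2, x0):
    """eq 12: Phi = S(with hydrotrope) / S(hydrotrope-free), where x1 is
    the API mole fraction at hydrotrope mole fraction x2 (x3 = 1 - x1 - x2)
    and x0 is the API mole fraction in pure water (there, x3 = 1 - x0)."""
    return solubility_S(x1, 1.0 - x1 - x2) / solubility_S(x0, 1.0 - x0)
```

As noted in the text, Φ grows with x 1 and diverges as the water mole fraction x 3 = 1 − x 1 − x 2 approaches zero.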
2.4 Melting Properties of Pure Components
The melting properties of a pure component are influenced by molecular symmetry, conformational diversity, and intermolecular forces. At the melting point, the melting properties are interrelated by the following equation

$$\Delta h_{\mathrm{m}}=T_{\mathrm{m}}\,\Delta s_{\mathrm{m}} \qquad (13)$$

where Δ s m is the melting entropy. The selected melting properties are
typical for APIs and liquid hydrotropes, which have dissimilar molecular
structures. APIs are generally rigid molecules with ordered crystalline
structures, while hydrotropes exhibit less ordered crystal structures.
To represent an API with low solubility in water, a high melting temperature
(400 K) and entropy (Walden’s rule; Δ s m = 54.4 J mol –1 K –1 ) were assumed for the API; the melting
enthalpy of the API was calculated with eq 13. The definition of Walden’s rule in this study
applies to ordered crystals of rigid molecules. The hydrotrope was assumed to have a melting entropy of
Δ s m = 28.1 J mol –1 K –1 to represent components with high solubility
in water. The hydrotrope melting temperature
of 285 K was assumed to simulate liquid hydrotropes at the solution
temperature and atmospheric pressure, with the melting enthalpy calculated
using eq 13.
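As a quick numerical check of eq 13 with the melting properties assumed in this section:

```python
def melting_enthalpy(T_m, ds_m):
    # eq 13: dh_m = T_m * ds_m at the melting point
    return T_m * ds_m

# assumed model components from this section
dh_api = melting_enthalpy(400.0, 54.4)   # model API (Walden's rule): 21 760 J mol^-1
dh_hyd = melting_enthalpy(285.0, 28.1)   # model liquid hydrotrope: 8 008.5 J mol^-1
```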
3 Results and Discussion
The Results section is divided into two main subsections. In the first, the results of a systematic study of the intermolecular interactions’ influence on the API solubility, determined with the Two-Suffix Margules model for a hypothetical system, are presented. The range in which the binary interaction parameters were varied was selected to ensure a significant and observable change in the API solubility. To validate these theoretical findings, the NRTL activity model was then utilized in the second subsection with experimental data from the literature, allowing for a comprehensive consideration of all molecular interactions.
3.1 Theoretical Analysis of the Intermolecular Interactions’ Influence on the API Solubility
As mentioned
in the previous section, the API solubility lines vary depending on
the molecular interactions among the API, hydrotrope, and water in
the liquid solution. Consequently, the position of the maximum API
solubility point in the ternary phase diagram also varies. Selecting
different hydrotropes to enhance API solubility influences the behavior
of the API solubility lines, where the API mole fraction and the API
solubility enhancement factor can either increase or decrease to different
extents with the hydrotrope mole fraction. This section presents a
detailed analysis of the impact of the liquid phase nonideality on
the behavior of API solubility lines in the ternary phase diagrams
along with the resultant API solubility enhancement factors. The nonideality
of the solution was considered to investigate the influence of the
interactions among API–water ( A 13 ), hydrotrope–water ( A 23 ), and
API–hydrotrope ( A 12 ) on the solubility
of API. For this study, a temperature range from 300 to 325 K was
selected, as this is a commonly used range in the pharmaceutical industry
and includes the body temperature range. In this section, the values
of the binary interaction parameters were varied to account for different
scenarios, covering cases with positive and negative deviations from
the ideal solution behavior. A summary of the binary interaction parameters
( A ij ) used in the following figures is provided in Table S1.
3.1.1 Effect of API–Water Molecular Interactions ( A 13 ) on the API Solubility Enhancement
At first, the impact of API–water interactions ( A 13 ) on the API solubility was explored. a shows the solubility
isotherms of the ternary API (1)/hydrotrope (2)/water (3) system at
305 K, varying A 13 from +2 to −2.
The binary interaction parameters A 12 and A 23 were assumed to be −0.5, indicating
favored interactions between the API–hydrotrope and hydrotrope–water.
As depicted in a, the API mole fraction changes along the API solubility lines,
starting from the API solubility in pure water (equivalent to point
″a″ in b). As the hydrotrope mole fraction increases in the liquid
solution, the API solubility either rises or decreases until it reaches
its solubility in the liquid hydrotrope (equivalent to point ″b″
in b). The
starting points (equivalent to ″a″) change with different A 13 values, indicating that the API solubility
in pure water varies. However, the end points remain constant, as
the API–hydrotrope interactions are unchanged. The behavior
of the API solubility lines changes with variations in the A 13 interaction parameter. b shows the corresponding ( x 1 / x 0 ) values along the solubility
isotherm at 305 K for the same A 13 values. As observed in b, if the API–water interactions are
significantly
stronger compared to API–hydrotrope and hydrotrope–water
interactions (green and light blue lines), ( x 1 / x 0 ) is less than 1 and decreases
as the hydrotrope mole fraction ( x 2 ) in
the liquid solution increases, indicating that adding hydrotrope does
not enhance API solubility in the liquid solution. Conversely,
when API–water interactions are less favored
(black, blue, and red lines), ( x 1 / x 0 ) increases with increasing ( x 2 ) and exceeds 1, signifying that the API solubility in
the solution increases with the addition of the hydrotropes. c depicts how API
activity coefficients in the liquid solution change relative to those
in pure water, i.e., γ 1 /γ 0 along
the solubility isotherm at 305 K, varying with different A 13 values. When API–water interactions are less
favored compared to API–hydrotrope and hydrotrope–water
interactions, (γ 1 /γ 0 ) decreases
as the hydrotrope mole fraction ( x 2 ) in
the liquid solution increases. Conversely, when API–water interactions
are more favored than hydrotrope–water and API–hydrotrope
interactions (green and light blue lines), (γ 1 /γ 0 ) increases with the increasing hydrotrope mole fraction.
The SLE equation demonstrates that as the activity coefficient of the API in the
liquid solution (γ 1 ) increases, the mole fraction
of the API in the liquid phase ( x 1 ) decreases
and vice versa. d illustrates
the influence of changing the solution temperature on Φ for x 2 = 0.5, assuming different A 13 values and A 12 = −0.5
and A 23 = −0.5. It is evident that
the temperature effect on Φ becomes more pronounced as the A 13 values increase. For instance, with A 13 = +2, Φ rises from approximately 7
to 10 when the temperature increases from 300 to 325 K. This suggests
that the less favorable the API–water interactions, the more sensitive the API solubility is to temperature changes. On the other hand, for A 13 < −1, Φ shows only a slight
increase with rising temperature. The SLE diagrams of the ternary
system calculated at different temperatures and assuming different A 13 values are provided in Figure S1 in the Supporting Information file.
3.1.2 Effect of Hydrotrope–Water Molecular Interactions ( A 23 ) on the API Solubility Enhancement
Next, the influence of hydrotrope–water
interactions (the value of A 23 ) on API
solubility was investigated. a shows the calculated solubility isotherms of the
ternary system at 305 K when varying the A 23 value between +2.8 and −2 and keeping A 13 = +0.5 and A 12 = −0.5,
indicating unfavorable interactions between the API and water and
favored interactions between the API and hydrotrope. The behavior
of the API solubility lines between API solubility in pure water (equivalent
to point ″a″ in b) and API solubility in pure hydrotrope (equivalent
to point ″b″ in b) varies significantly with the strength of the hydrotrope–water
interactions. b,c shows the corresponding ( x 1 / x 0 ) and (γ 1 /γ 0 ) values along the solubility isotherm at 305 K for various A 23 values, respectively. When the hydrotrope–water interactions are
weaker and less
favored than API–water interactions and API–hydrotrope
interactions (green and light blue lines), ( x 1 / x 0 ) increases along the API solubility
lines. This rise continues until the API solubility in the solution
approaches its maximum. Beyond this point, ( x 1 / x 0 ) decreases to the solubility
in pure hydrotrope (equivalent to point ″b″ in b). This indicates
that weaker hydrotrope–water interactions result in higher
API solubility in the ternary mixture compared to API solubility in
the pure hydrotrope. The effect on solubility enhancement decreases
as the hydrotrope–water
interactions become slightly stronger but are still less favorable
than the API–water and API–hydrotrope interactions (purple
line). The hydrotrope concentrations, where API mole fractions
are highest
in the ternary solution, correspond to those where API activity coefficients
are lowest and vice versa. For weaker hydrotrope–water interactions
compared to API–hydrotrope but stronger than API–water
interactions (black line), ( x 1 / x 0 ) increases and (γ 1 /γ 0 ) decreases nearly linearly with hydrotrope mole fraction. Conversely, when the hydrotrope–water interactions are strong
and more favored than the API–water and API–hydrotrope
interactions (red and dark blue lines), ( x 1 / x 0 ) initially remains constant or decreases
at low hydrotrope concentrations. However, with increasing hydrotrope
concentration, ( x 1 / x 0 ) starts to increase again until it reaches the maximum API
solubility in the liquid hydrotrope. The corresponding (γ 1 /γ 0 ) values display a mirror image behavior
to ( x 1 / x 0 )
values. d depicts
the effect of varying the solution temperature on Φ when x 2 = 0.5, considering various A 23 values, with A 12 = −0.5
and A 13 = +0.5. It is evident that the
temperature impact on Φ becomes more significant as the A 23 values increase. For instance, when A 23 = +2.8, Φ rises from approximately
7 to 10 as the temperature increases from 300 to 325 K. Conversely,
for A 23 < −1, Φ only
increases slightly from about 2.25 to 3 within the same temperature
range. The SLE diagrams of the ternary system calculated at different
temperatures and assuming different A 23 values are reported in Figure S2 in the
Supporting Information file.
3.1.3 Effect of API–Hydrotrope Molecular Interactions ( A 12 ) on the API Solubility Enhancement
Lastly, the influence of the API–hydrotrope
interactions (the value of A 12 ) on the
API solubility was explored. a shows the calculated solubility isotherms of the
ternary API (1)/hydrotrope (2)/water (3) system at 305 K when varying
the value of A 12 from +2 to −2
and assuming A 13 = +0.5 and A 23 = −0.5, indicating unfavorable interactions
between the API and water and favored interactions between the hydrotrope
and water. As depicted in a, the API solubility lines begin from the same point
since the API–water interactions are constant, but the end
point varies with different API–hydrotrope interactions. b,c displays the
corresponding ( x 1 / x 0 ) and (γ 1 /γ 0 ) values calculated
along the solubility isotherms at 305 K obtained for different A 12 values, respectively. If the API–hydrotrope interactions are more
favored than
hydrotrope–water and API–water interactions (light blue
and green lines), ( x 1 / x 0 ) increases as the mole fraction of the hydrotrope ( x 2 ) in the liquid solution increases. However,
as the API–hydrotrope interactions become weaker (black line),
the effect on solubility enhancement decreases. The corresponding
(γ 1 /γ 0 ) decreases as the hydrotrope
mole fraction ( x 2 ) in the liquid solution
increases. In contrast, when API–hydrotrope interactions
are unfavorable
(the red and dark blue lines), the ( x 1 / x 0 ) value becomes less than 1, indicating
that the API solubility in the liquid solution is lower than that
in pure water. The corresponding (γ 1 /γ 0 ) values increases as the hydrotrope mole fraction ( x 2 ) in the liquid solution increases. d illustrates
the impact of changing the solution temperature on Φ at x 2 = 0.5, considering different A 12 values and A 23 = −0.5
and A 13 = +0.5. As seen in d, the influence of temperature
on Φ increases as the A 12 value
decreases. For instance, when the solution temperature increases from
300 to 325 K, Φ rises from around 8.8 to 18 when A 12 = −2. Conversely, when A 12 > 0, Φ changes only slightly with an increase in temperature
from 300 to 325 K. Additionally, the temperature impact on Φ
is significantly larger when changing A 12 than when changing A 13 and A 23 , as seen in the corresponding panels d.
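The temperature sensitivities quoted in this subsection can be reproduced with the Two-Suffix Margules model itself. The sketch below is an independent reimplementation (not the authors' code) using the assumed API melting properties (T m = 400 K, Δ s m = 54.4 J mol–1 K–1): it solves eq 2 at a fixed hydrotrope mole fraction x 2 = 0.5 and at x 2 = 0, then takes the ratio of S = x 1 / x 3 as in eq 12. For A 12 = −2, A 13 = +0.5, A 23 = −0.5 it recovers Φ ≈ 8.9 at 300 K and Φ ≈ 18 at 325 K, consistent with the values discussed above.

```python
from math import exp

R, DH_M, T_M = 8.314, 21760.0, 400.0   # gas constant; assumed API melting data

def ln_gamma1(x2, x3, A12, A13, A23):
    # Two-Suffix Margules activity coefficient of the API (eq 3)
    return A12 * x2**2 + A13 * x3**2 + (A12 + A13 - A23) * x2 * x3

def x1_at(T, x2, A12, A13, A23):
    """API mole fraction from eq 2 at a fixed hydrotrope mole fraction x2
    (so x3 = 1 - x1 - x2), solved by fixed-point iteration."""
    rhs = -(DH_M / R) * (1.0 / T - 1.0 / T_M)
    x1 = exp(rhs)
    for _ in range(500):
        x1 = exp(rhs - ln_gamma1(x2, 1.0 - x1 - x2, A12, A13, A23))
    return x1

def phi(T, A12, A13, A23, x2=0.5):
    # eq 12 with S = x1/x3, comparing x2 = 0.5 against hydrotrope-free water
    x1 = x1_at(T, x2, A12, A13, A23)
    x0 = x1_at(T, 0.0, A12, A13, A23)
    return (x1 / (1.0 - x1 - x2)) / (x0 / (1.0 - x0))

# A12 = -2, A13 = +0.5, A23 = -0.5: Phi grows strongly with temperature
print(round(phi(300.0, -2.0, 0.5, -0.5), 1))  # 8.9
print(round(phi(325.0, -2.0, 0.5, -0.5), 1))  # 18.0
```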
The SLE diagrams of the ternary system calculated at different temperatures
and assuming different A 12 values
are depicted in Figure S3 in the Supporting
Information file.
3.2 Analysis of Molecular Interactions in Real Ternary Systems: A Case Study of 1,2-Alkanediols
The theoretical
analysis in the previous section demonstrated that API solubility
in the ternary solution significantly increases as API–hydrotrope
interactions strengthen. Conversely, if hydrotrope–water interactions
exceed API–hydrotrope interactions, then API solubility decreases.
This effect becomes more pronounced as API–water interactions
become less favored. In this section, the theoretical findings on
the impact of molecular interactions on API solubility in the ternary
API (1)/hydrotrope (2)/water (3) system will be validated using experimental
data from the literature. Syringic acid,
known for its antioxidant, antimicrobial, anti-inflammatory, anticancer,
and antidiabetic properties, represents API (1). The melting temperature
and melting enthalpy of syringic acid are 482.5 K and 28.1 kJ mol –1 , respectively, while
the melting entropy calculated using eq 13 is 58.2 J mol –1 K –1 . In the work of Abranches et al., the
solubility of syringic acid was measured in water (3) in the presence
of different 1,2-alkanediols as hydrotropes (2): 1,2-ethanediol, 1,2-propanediol,
1,2-butanediol, 1,2-pentanediol, and 1,2-hexanediol, at 303.2 K. The
experimental data were converted into mole fractions, leading to differences
in the shape and maximum points of the solubility lines to those in
the original article. The molecular structures of syringic acid and
the studied 1,2-alkanediols are provided in Table S2 of the Supporting Information file. The accompanying table presents the calculated interaction
energy parameters ( U ) and the corresponding interaction
parameters ( A ) using the NRTL model by fitting the
experimental phase equilibria data, as described above. As shown by the fitted parameters, increasing the alkyl chain length from 1,2-ethanediol to 1,2-pentanediol,
( U 12 ) becomes more negative, indicating
a more favored interaction (more negative energy) between syringic
acid and the hydrotrope. However, from 1,2-pentanediol to 1,2-hexanediol,
there is a slight increase (less negative), indicating that the interaction
strength stabilizes or slightly weakens at this point. The increasing
hydrophobicity of the longer alkyl chains likely enhances the affinity
of the hydrotrope for syringic acid due to stronger nonpolar interactions.
However, this effect plateaus beyond a specific chain length (between
pentanediol and hexanediol), possibly due to steric hindrance or solubility
limitations. As the alkyl chain length increases, the ( U 23 ) values become more positive, indicating
that hydrotrope–water
interactions are less favored. This occurs because longer alkyl chains
are more hydrophobic, making it more difficult for the hydrotrope
to interact with water, a polar solvent. Interestingly, 1,2-pentanediol, though slightly shorter than 1,2-hexanediol, demonstrated the strongest water-repelling effect
in the series. In contrast, the increase in the chain length of 1,2-hexanediol
may allow the molecule to fold or orient in a way that maintains some
level of interaction with water. The self-interaction energy
values of hydrotropes ( U 22 ) decrease from
positive to negative as the alkyl chain
length increases, indicating a transition from weaker to stronger
self-interactions among the hydrotropes with the increasing alkyl
chain length. Shorter chains exhibit positive values of U 22 due to their lower hydrophobicity, leading to more
interactions with the surrounding solvent. In contrast, longer chains
become more hydrophobic, promoting stronger nonpolar interactions
among hydrotrope molecules and resulting in negative ( U 22 ) values that reflect enhanced hydrotrope self-aggregation.
A sharp drop in the ( U 22 ) values for 1,2-pentanediol and 1,2-hexanediol, compared to the shorter-chain 1,2-alkanediols, may
be attributed to a threshold effect in hydrophobicity, where the longer
alkyl chains lead to a significant increase in the self-aggregation
of the hydrotrope molecules. The interaction energy values between
syringic acid and water ( U 13 ), the self-interaction
energy of syringic
acid ( U 11 ), and the self-interaction energy
of water ( U 33 ) remain constant for all
1,2-alkanediols, as expected, since these two components do not change
in the studied ternary systems. a illustrates
the corresponding ( x 1 / x 0 ) values calculated along the solubility isotherms at
303.2 K for all of the studied systems. In all cases, the ( x 1 / x 0 ) values rise
as the mole fraction of the hydrotrope ( x 2 ) in the liquid solution increases. The selection of different 1,2-alkanediols
significantly affects the solubility of syringic acid in water. Choosing
1,2-alkanediols with longer alkyl chain lengths results in a more
significant enhancement of syringic acid solubility. This can be attributed
to the more favored interaction energies between the hydrotrope and
syringic acid and weaker interactions between the hydrotrope and water.
For higher 1,2-alkanediols (1,2-pentanediol and 1,2-hexanediol), a
smaller amount of hydrotrope is required to achieve an 80-fold enhancement
in syringic acid solubility ( x 1 / x 0 ), approximately half of the maximum solubility
enhancement observed in the experiments. b illustrates
the syringic acid activity coefficients in the liquid solution relative
to those in pure water (γ 1 /γ 0 ) along
the solubility isotherm at 303.2 K. The corresponding (γ 1 /γ 0 ) values decrease as the hydrotrope mole
fraction ( x 2 ) in the liquid solution increases.
Furthermore, the (γ 1 /γ 0 ) values
decrease with an increase in the 1,2-alkanediol chain length, except
for 1,2-hexanediol, as discussed previously. In addition to the syringic
acid activity coefficients, the activity coefficients of 1,2-alkanediols
and water in the ternary system were calculated, as shown in Figure S4a,b in the Supporting Information file.
As the alkyl chain length increases, the (γ 2 /γ 0 ) and (γ 3 /γ 0 ) values rise
from 1,2-ethanediol to 1,2-pentanediol, indicating decreased solubility
of both the hydrotropes and water in the ternary system (higher activity
coefficients), which corresponds to increasing solubility of syringic
acid in the same order. However, 1,2-hexanediol deviates from this
trend, exhibiting lower (γ 2 /γ 0 )
and (γ 3 /γ 0 ) values than those of
the other hydrotropes, which could also explain the decreased syringic
acid solubility. Note that syringic acid solubility follows the expected
trend in the dilute range, but 1,2-hexanediol deviates as hydrotrope–water
interactions become more dominant with the increasing hydrotrope concentration,
reversing beyond ( x 2 = 0.1).
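The NRTL activity coefficients behind fits like the one in this section can be computed with the standard multicomponent NRTL expression. The sketch below is a generic implementation (not the authors' code): the dimensionless tau matrix plays the role of the A ij parameters obtained from the U ij energies via eqs 7–9, with tau[i][i] = 0, and alpha is the nonrandomness factor matrix.

```python
from math import exp

def nrtl_gammas(x, tau, alpha):
    """Multicomponent NRTL activity coefficients.
    x:     mole fractions (summing to 1)
    tau:   tau[i][j] dimensionless interaction parameters, tau[i][i] = 0
    alpha: alpha[i][j] nonrandomness factors (symmetric)"""
    n = len(x)
    G = [[exp(-alpha[i][j] * tau[i][j]) for j in range(n)] for i in range(n)]
    gammas = []
    for i in range(n):
        den_i = sum(x[k] * G[k][i] for k in range(n))
        term1 = sum(x[j] * tau[j][i] * G[j][i] for j in range(n)) / den_i
        term2 = 0.0
        for j in range(n):
            den_j = sum(x[k] * G[k][j] for k in range(n))
            bar_j = sum(x[m] * tau[m][j] * G[m][j] for m in range(n)) / den_j
            term2 += x[j] * G[i][j] / den_j * (tau[i][j] - bar_j)
        gammas.append(exp(term1 + term2))
    return gammas
```

With all tau entries equal to zero the mixture is ideal (all γ = 1), and, as noted in the modeling section, symmetric parameters with zero nonrandomness collapse the model toward the Two-Suffix Margules form. A fitting routine would wrap this function in the RMSD objective of eq 10.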
Theoretical Analysis of the Intermolecular
Interactions’ Influence on the API Solubility As mentioned
in the previous section, the API solubility lines vary depending on
the molecular interactions among the API, hydrotrope, and water in
the liquid solution. Consequently, the position of the maximum API
solubility point in the ternary phase diagram also varies. Selecting
different hydrotropes to enhance API solubility influences the behavior
of the API solubility lines, where the API mole fraction and the API
solubility enhancement factor can either increase or decrease to different
extents with the hydrotrope mole fraction. This section presents a
detailed analysis of the impact of the liquid phase nonideality on
the behavior of API solubility lines in the ternary phase diagrams
along with the resultant API solubility enhancement factors. The nonideality
of the solution was considered to investigate the influence of the
interactions among API–water ( A 13 ), hydrotrope–water ( A 23 ), and
API–hydrotrope ( A 12 ) on the solubility
of API. For this study, a temperature range from 300 to 325 K was
selected, as this is a commonly used range in the pharmaceutical industry
and includes the body temperature range. In this section, the values
of the binary interaction parameters were varied to account for different
scenarios, covering cases with positive and negative deviations from
the ideal solution behavior. A summary of the binary interaction parameters
( A ij ) used in to is provided in Table S1 . 3.1.1 Effect of API–Water Molecular Interactions
( A 13 ) on the API Solubility Enhancement At first, the impact of API–water interactions ( A 13 ) on the API solubility was explored. a shows the solubility
isotherms of the ternary API (1)/hydrotrope (2)/water (3) system at
305 K, varying A 13 from +2 to −2.
The binary interaction parameters A 12 and A 23 were assumed to be −0.5, indicating
favored interactions between the API–hydrotrope and hydrotrope–water.
As depicted in a, the API mole fraction changes along the API solubility lines,
starting from the API solubility in pure water (equivalent to point
″a″ in b). As the hydrotrope mole fraction increases in the liquid
solution, the API solubility either rises or decreases until it reaches
its solubility in the liquid hydrotrope (equivalent to point ″b″
in b). The
starting points (equivalent to ″a″) change with different A 13 values, indicating that the API solubility
in pure water varies. However, the end points remain constant, as
the API–hydrotrope interactions are unchanged. The behavior
of the API solubility lines changes with variations in the A 13 interaction parameter. b shows the corresponding ( x 1 / x 0 ) values along the solubility
isotherm at 305 K for the same A 13 values. As observed in b, if the API–water interactions are
significantly
stronger compared to API–hydrotrope and hydrotrope–water
interactions (green and light blue lines), ( x 1 / x 0 ) is less than 1 and decreases
as the hydrotrope mole fraction ( x 2 ) in
the liquid solution increases, indicating that adding hydrotrope does
not enhance API solubility in the liquid solution. Conversely,
when API–water interactions are less favored
(black, blue, and red lines), ( x 1 / x 0 ) increases with increasing ( x 2 ) and exceeds 1, signifying that the API solubility in
the solution increases with the addition of the hydrotropes. c depicts how API
activity coefficients in the liquid solution change relative to those
in pure water, i.e., γ 1 /γ 0 along
the solubility isotherm at 305 K, varying with different A 13 values. When API–water interactions are less
favored compared to API–hydrotrope and hydrotrope–water
interactions, (γ 1 /γ 0 ) decreases
as the hydrotrope mole fraction ( x 2 ) in
the liquid solution increases. Conversely, when API–water interactions
are more favored than hydrotrope–water and API–hydrotrope
interactions (green and light blue lines), (γ 1 /γ 0 ) increases with the increasing hydrotrope mole fraction.
The SLE equation demonstrates that as the activity coefficient of the API in the
liquid solution (γ 1 ) increases, the mole fraction
of the API in the liquid phase ( x 1 ) decreases
and vice versa. d illustrates
the influence of changing the solution temperature on Φ for x 2 = 0.5, assuming different A 13 values and A 12 = −0.5
and A 23 = −0.5. It is evident that
the temperature effect on Φ becomes more pronounced as the A 13 values increase. For instance, with A 13 = +2, Φ rises from approximately 7
to 10 when the temperature increases from 300 to 325 K. This suggests
that the less favorable the API-water interactions, the more sensitive
the API solubility is to temperature changes. On the other hand, for A 13 < – 1, Φ shows only a slight
increase with rising temperature. The SLE diagrams of the ternary
system calculated at different temperatures and assuming different A 13 values are provided in Figure S1 in the Supporting Information file. 3.1.2 Effect of Hydrotrope–Water Molecular
Interactions ( A 23 ) on the API Solubility
Enhancement Next, the influence of hydrotrope–water
interactions (the value of A 23 ) on API
solubility was investigated. a shows the calculated solubility isotherms of the
ternary system at 305 K when varying the A 23 value between +2.8 and −2 and keeping A 13 = +0.5 and A 12 = −0.5,
indicating unfavorable interactions between the API and water and
favored interactions between the API and hydrotrope. The behavior
of the API solubility lines between API solubility in pure water (equivalent
to point ″a″ in b) and API solubility in pure hydrotrope (equivalent
to point ″b″ in b) varies significantly with the strength of the hydrotrope–water
interactions. b,c shows the corresponding ( x 1 / x 0 ) and (γ 1 /γ 0 ) values along the solubility isotherm at 305 K for various A 23 values, respectively. When the hydrotrope–water interactions are
weaker and less
favored than API–water interactions and API–hydrotrope
interactions (green and light blue lines), ( x 1 / x 0 ) increases along the API solubility
lines. This rise continues until the API solubility in the solution
approaches its maximum. Beyond this point, ( x 1 / x 0 ) decreases to the solubility
in pure hydrotrope (equivalent to point ″b″ in b). This indicates
that weaker hydrotrope–water interactions result in higher
API solubility in the ternary mixture compared to API solubility in
the pure hydrotrope. The effect on solubility enhancement decreases
as the hydrotrope–water
interactions become slightly stronger but are still less favorable
than the API–water and API–hydrotrope interactions (purple
line). The hydrotrope concentrations where API mole fractions are highest
in the ternary solution correspond to those where API activity coefficients
are lowest, and vice versa. For weaker hydrotrope–water interactions
compared to API–hydrotrope but stronger than API–water
interactions (black line), ( x 1 / x 0 ) increases and (γ 1 /γ 0 ) decreases nearly linearly with hydrotrope mole fraction. Conversely, when the hydrotrope–water interactions are strong
and more favored than the API–water and API–hydrotrope
interactions (red and dark blue lines), ( x 1 / x 0 ) initially remains constant or decreases
at low hydrotrope concentrations. However, with increasing hydrotrope
concentration, ( x 1 / x 0 ) starts to increase again until it reaches the maximum API
solubility in the liquid hydrotrope. The corresponding (γ 1 /γ 0 ) values display a mirror image behavior
to ( x 1 / x 0 )
values. d depicts
the effect of varying the solution temperature on Φ when x 2 = 0.5, considering various A 23 values, with A 12 = −0.5
and A 13 = +0.5. It is evident that the
temperature impact on Φ becomes more significant as the A 23 values increase. For instance, when A 23 = +2.8, Φ rises from approximately
7 to 10 as the temperature increases from 300 to 325 K. Conversely,
for A 23 < −1, Φ only
increases slightly from about 2.25 to 3 within the same temperature
range. The SLE diagrams of the ternary system calculated at different
temperatures and assuming different A 23 values are reported in Figure S2 in the
Supporting Information file.

3.1.3 Effect of API–Hydrotrope Molecular Interactions ( A 12 ) on the API Solubility Enhancement

Lastly, the influence of the API–hydrotrope
interactions (the value of A 12 ) on the
API solubility was explored. a shows the calculated solubility isotherms of the
ternary API (1)/hydrotrope (2)/water (3) system at 305 K when varying
the value of A 12 from +2 to −2
and assuming A 13 = +0.5 and A 23 = −0.5, indicating unfavorable interactions
between the API and water and favored interactions between the hydrotrope
and water. As depicted in a, the API solubility lines begin from the same point
since the API–water interactions are constant, but the end
point varies with different API–hydrotrope interactions. b,c displays the
corresponding ( x 1 / x 0 ) and (γ 1 /γ 0 ) values calculated
along the solubility isotherms at 305 K obtained for different A 12 values, respectively. If the API–hydrotrope interactions are more
favored than
hydrotrope–water and API–water interactions (light blue
and green lines), ( x 1 / x 0 ) increases as the mole fraction of the hydrotrope ( x 2 ) in the liquid solution increases. However,
as the API–hydrotrope interactions become weaker (black line),
the effect on solubility enhancement decreases. The corresponding
(γ 1 /γ 0 ) decreases as the hydrotrope
mole fraction ( x 2 ) in the liquid solution
increases. In contrast, when API–hydrotrope interactions
are unfavorable
(the red and dark blue lines), the ( x 1 / x 0 ) value becomes less than 1, indicating
that the API solubility in the liquid solution is lower than that
in pure water. The corresponding (γ 1 /γ 0 ) values increases as the hydrotrope mole fraction ( x 2 ) in the liquid solution increases. d illustrates
the impact of changing the solution temperature on Φ at x 2 = 0.5, considering different A 12 values and A 23 = −0.5
and A 13 = +0.5. As seen in d, the influence of temperature
on Φ increases as the A 12 value
decreases. For instance, when the solution temperature increases from
300 to 325 K, Φ rises from around 8.8 to 18 when A 12 = −2. Conversely, when A 12 > 0, Φ changes only slightly with an increase in temperature
from 300 to 325 K. Additionally, the temperature impact on Φ
is significantly larger when changing the A 12 compared to changing A 13 and A 23 , as seen in d and d, respectively.
The SLE diagrams of the ternary system calculated at different temperatures
and assuming different A 12 values
are depicted in Figure S3 in the Supporting
Information file.
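The trends in all three subsections follow from the solid-liquid equilibrium relation between the API activity coefficient and its dissolved mole fraction. Below is a minimal numerical sketch, assuming the classical SLE expression with the heat-capacity difference between solid and melt neglected, and using the syringic acid melting data quoted later in the case study; the function names are ours, not the authors':

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def ideal_solubility(T, T_m, dH_m):
    """Ideal mole-fraction solubility from the classical SLE equation,
    ln(x1 * gamma1) = -(dH_m / R) * (1/T - 1/T_m), evaluated for gamma1 = 1
    (heat-capacity difference between solid and melt neglected)."""
    return math.exp(-(dH_m / R) * (1.0 / T - 1.0 / T_m))

def solubility(T, T_m, dH_m, gamma1):
    """Real solubility x1 = x1_ideal / gamma1: a larger liquid-phase activity
    coefficient directly lowers the dissolved API mole fraction."""
    return ideal_solubility(T, T_m, dH_m) / gamma1

# Illustrative values for syringic acid (melting data quoted in the case study):
T_m, dH_m = 482.5, 28.1e3
x_305 = solubility(305.0, T_m, dH_m, gamma1=1.0)  # ideal solubility at 305 K
# The melting entropy follows as dH_m / T_m, about 58.2 J mol^-1 K^-1.
```

At a fixed temperature, x 1 / x 0 reduces to γ 0 /γ 1 , which is why the solubility-enhancement and activity-coefficient curves mirror each other in the figures.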
Analysis of Molecular Interactions in Real Ternary Systems: a Case-Study of 1,2-Alkanediols

The theoretical
analysis in the previous section demonstrated that API solubility
in the ternary solution significantly increases as API–hydrotrope
interactions strengthen. Conversely, if hydrotrope–water interactions
exceed API–hydrotrope interactions, then API solubility decreases.
This effect becomes more pronounced as API–water interactions
become less favored. In this section, the theoretical findings on
the impact of molecular interactions on API solubility in the ternary
API (1)/hydrotrope (2)/water (3) system will be validated using experimental
data from the literature. Syringic acid,
known for its antioxidant, antimicrobial, anti-inflammatory, anticancer,
and antidiabetic properties, represents API (1). The melting temperature
and melting enthalpy of syringic acid are 482.5 K and 28.1 kJ mol –1 , respectively, while
the melting entropy calculated from ΔS m = ΔH m / T m is 58.2 J mol –1 K –1 . In the work of Abranches et al., the
solubility of syringic acid was measured in water (3) in the presence
of different 1,2-alkanediols as hydrotropes (2): 1,2-ethanediol, 1,2-propanediol,
1,2-butanediol, 1,2-pentanediol, and 1,2-hexanediol, at 303.2 K. The
experimental data were converted into mole fractions, leading to differences
in the shape and maximum points of the solubility lines compared to those in
the original article. The molecular structures of syringic acid and
the studied 1,2-alkanediols are provided in Table S2 of the Supporting Information file. presents the calculated interaction
energy parameters ( U ) and the corresponding interaction
parameters ( A ) using the NRTL model by fitting the
experimental phase equilibria data, as described in . As shown in , as the alkyl chain length increases from 1,2-ethanediol to 1,2-pentanediol,
( U 12 ) becomes more negative, indicating
a more favored interaction (more negative energy) between syringic
acid and the hydrotrope. However, from 1,2-pentanediol to 1,2-hexanediol,
there is a slight increase (less negative), indicating that the interaction
strength stabilizes or slightly weakens at this point. The increasing
hydrophobicity of the longer alkyl chains likely enhances the affinity
of the hydrotrope for syringic acid due to stronger nonpolar interactions.
However, this effect plateaus beyond a specific chain length (between
pentanediol and hexanediol), possibly due to steric hindrance or solubility
limitations. As the alkyl chain length increases, the ( U 23 ) values become more positive, indicating
that hydrotrope–water
interactions are less favored. This occurs because longer alkyl chains
are more hydrophobic, making it more difficult for the hydrotrope
to interact with water, a polar solvent. Interestingly, 1,2-pentanediol,
though slightly shorter, demonstrated the strongest water-repelling effect
in the series. In contrast, the increase in the chain length of 1,2-hexanediol
may allow the molecule to fold or orient in a way that maintains some
level of interaction with water. The self-interaction energy
values of hydrotropes ( U 22 ) decrease from
positive to negative as the alkyl chain
length increases, indicating a transition from weaker to stronger
self-interactions among the hydrotropes with the increasing alkyl
chain length. Shorter chains exhibit positive values of U 22 due to their lower hydrophobicity, leading to more
interactions with the surrounding solvent. In contrast, longer chains
become more hydrophobic, promoting stronger nonpolar interactions
among hydrotrope molecules and resulting in negative ( U 22 ) values that reflect enhanced hydrotrope self-aggregation.
A sharp increase in ( U 22 ) values for 1,2-pentanediol
and 1,2-hexanediol, compared to shorter-chain 1,2-alkanediols, may
be attributed to a threshold effect in hydrophobicity, where the longer
alkyl chains lead to a significant increase in the self-aggregation
of the hydrotrope molecules. The interaction energy values between
syringic acid and water ( U 13 ), the self-interaction
energy of syringic
acid ( U 11 ), and the self-interaction energy
of water ( U 33 ) remain constant for all
1,2-alkanediols, as expected, since these two components do not change
in the studied ternary systems. a illustrates
the corresponding ( x 1 / x 0 ) values calculated along the solubility isotherms at
303.2 K for all of the studied systems. In all cases, the ( x 1 / x 0 ) values rise
as the mole fraction of the hydrotrope ( x 2 ) in the liquid solution increases. The selection of different 1,2-alkanediols
significantly affects the solubility of syringic acid in water. Choosing
1,2-alkanediols with longer alkyl chain lengths results in a more
significant enhancement of syringic acid solubility. This can be attributed
to the more favored interaction energies between the hydrotrope and
syringic acid and weaker interactions between the hydrotrope and water.
For higher 1,2-alkanediols (1,2-pentanediol and 1,2-hexanediol), a
smaller amount of hydrotrope is required to achieve an 80-fold enhancement
in syringic acid solubility ( x 1 / x 0 ), approximately half of the maximum solubility
enhancement observed in the experiments. b illustrates
the syringic acid activity coefficients in the liquid solution relative
to those in pure water (γ 1 /γ 0 ) along
the solubility isotherm at 303.2 K. The corresponding (γ 1 /γ 0 ) values decrease as the hydrotrope mole
fraction ( x 2 ) in the liquid solution increases.
Furthermore, the (γ 1 /γ 0 ) values
decrease with an increase in the 1,2-alkanediol chain length, except
for 1,2-hexanediol, as discussed previously. In addition to the syringic
acid activity coefficients, the activity coefficients of 1,2-alkanediols
and water in the ternary system were calculated, as shown in Figure S4a,b in the Supporting Information file.
As the alkyl chain length increases, the (γ 2 /γ 0 ) and (γ 3 /γ 0 ) values rise
from 1,2-ethanediol to 1,2-pentanediol, indicating decreased solubility
of both the hydrotropes and water in the ternary system (higher activity
coefficients), which corresponds to increasing solubility of syringic
acid in the same order. However, 1,2-hexanediol deviates from this
trend, exhibiting lower (γ 2 /γ 0 )
and (γ 3 /γ 0 ) values than those of
the other hydrotropes, which could also explain the decreased syringic
acid solubility. Note that syringic acid solubility follows the expected
trend in the dilute range, but 1,2-hexanediol deviates as hydrotrope–water
interactions become more dominant with the increasing hydrotrope concentration,
reversing beyond ( x 2 = 0.1).
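The interaction parameters fitted in this section enter the solubility calculation through the NRTL activity-coefficient model. The following is a generic multicomponent NRTL sketch, not the authors' code; the mapping of the reported A ij parameters onto τ ij , the common non-randomness factor α = 0.3, and the example composition are all illustrative assumptions:

```python
import numpy as np

def nrtl_gammas(x, tau, alpha=0.3):
    """Multicomponent NRTL activity coefficients.

    x    : mole fractions, shape (n,)
    tau  : dimensionless interaction parameters tau[i, j] (tau[i, i] = 0);
           relating these to the paper's A_ij or U_ij values is an assumption
    alpha: non-randomness factor (taken equal for all pairs here)
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    G = np.exp(-alpha * tau)
    ln_gamma = np.zeros(n)
    for i in range(n):
        S_i = G[:, i] @ x  # sum_k x_k G_ki
        term1 = (x * G[:, i]) @ tau[:, i] / S_i
        term2 = 0.0
        for j in range(n):
            S_j = G[:, j] @ x
            term2 += (x[j] * G[i, j] / S_j) * (
                tau[i, j] - (x * G[:, j]) @ tau[:, j] / S_j
            )
        ln_gamma[i] = term1 + term2
    return np.exp(ln_gamma)

# Illustrative (symmetric, not fitted) ternary parameters for API (1) /
# hydrotrope (2) / water (3): unfavorable API-water, favorable API-hydrotrope.
tau = np.array([[0.0, -0.5, 2.0],
                [-0.5, 0.0, -0.5],
                [2.0, -0.5, 0.0]])
gammas = nrtl_gammas([0.02, 0.30, 0.68], tau)
```

With such γ values in hand, the solubility lines and ( x 1 / x 0 ) ratios discussed above follow from the SLE relation.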
Conclusions

In this work, we employed
thermodynamic modeling to understand
the interplay of pairwise interactions among API (1), hydrotrope (2),
and water (3), which plays a critical role in determining API solubility
in water. These interactions influence the location of maximum solubility
within the ternary system, helping to identify the most effective
hydrotropes for enhancing API solubility. The impact of liquid phase
nonideality and pairwise interactions on the solubility enhancement
of the API in a hypothetical ternary API/hydrotrope/water system was
investigated. It was found that more favored API–hydrotrope
interactions compared to the API–water and hydrotrope–water
interactions would significantly improve the API solubility in liquid
solution. On the other hand, if the hydrotrope–water interactions
are more favored than the API–hydrotrope interactions, the
solubility of API in the solution would decrease. Thus, the best scenario
for improving the solubility of API is to select a hydrotrope that
strongly interacts with the API but moderately interacts with water.
In , an overview
of API solubility change with the addition of hydrotrope for different
interaction strengths (weak, medium, or strong) between the API, hydrotrope,
and water, based on the values of the binary interaction parameters
( A 13 , A 12 ,
and A 23 ) is provided. The theoretical
findings were validated by using experimental solubility data for
syringic acid in water with various 1,2-alkanediols from the literature,
which confirmed our predictions about the impact of pairwise interactions
on API solubility. The findings of this work aim to streamline pharmaceutical
formulation
development by minimizing the experimental effort required to identify
effective hydrotrope candidates. The acquired knowledge of intermolecular
interactions and their influence on API solubility, combined with
thermodynamic models, enables the efficient screening and selection
of hydrotropes to achieve targeted solubility. This approach supports
the design of effective drug delivery systems and the development
of novel API–hydrotrope combinations that enhance the solubility
and bioavailability in water. The findings can also be extended to
other solvents, which play important roles in API synthesis and purification
(e.g., crystallization). Although this study focuses on theoretical
modeling, further research is recommended to assess the safety of
specific API–hydrotrope combinations and their optimal concentrations
in biological systems.
Immunopeptidomics for autoimmunity: unlocking the chamber of immune secrets

T cells form a highly antigen-specific arm of the adaptive immune system. T cells achieve this specificity through their surface T Cell Receptors (TCRs), which are diversified through V(D)J recombination to generate a large number of unique clones. T cells can bear either TCRalpha and beta chains (αβT cells) or TCRgamma and delta chains (γδT cells). Classically, αβT cells recognize peptide epitopes presented on MHC/HLA, whereas γδT cells recognize non-peptide ligands on non-classical MHC such as CD1d and MR1. While presentation through and recognition of non-classical MHC is important, here we will focus on presentation of peptide epitopes to T cells by classical MHC. For the purposes of this review, we will use MHC and HLA interchangeably, with HLA as the preferred terminology for human alleles. There are two distinct classes of T cells: CD8+ T cells recognize epitopes on class I MHC, whereas CD4+ T cells recognize epitopes on class II MHC. The epitopes presented on class I MHC are typically 8-12 amino acids in length, whereas those presented by class II MHC can range from 10-25 amino acids. Class I MHC are found on all nucleated cells, whereas class II MHC are found typically on professional antigen presenting cells (APCs) such as B cells, dendritic cells, and macrophages. Central and peripheral tolerance mechanisms typically restrict self-reactive T cells. T cells that recognize self-antigens in MHC molecules can be unleashed either by a failure of tolerance and/or therapeutic blockade of immune checkpoints. The recognition of self epitopes presented on MHC is critical for developing autoimmunity. The pathways and sources of epitopes presented on these molecules are distinct, and together extensively sample the extracellular and intracellular proteome.
The MHC class I pathway presents peptides from intracellular sources, such as viral or endogenous proteins, to CD8+ T cells. These proteins are degraded by the proteasome, and the resulting peptides are transported into the ER by TAP, where they are loaded onto class I MHC molecules. In contrast, the class II MHC pathway captures extracellular proteins, which are internalized, processed in endosomes, and presented on class II MHC molecules to CD4+ T cells. Cross-presentation involves the uptake of extracellular antigens by dendritic cells, which then process these antigens via either a vacuolar pathway or a cytosolic pathway. In the vacuolar pathway, antigens are degraded within endosomal compartments and loaded onto class I MHC molecules. In the cytosolic pathway, dendritic cells can translocate the extracellular antigens into the cytosol, where they are processed by the proteasome. This mechanism allows dendritic cells to activate CD8+ T cells and initiate cytotoxic responses against pathogens or tumors that do not directly infect them. The ability of MHC molecules to process and present epitopes from self and foreign proteins is critical for adaptive immunity.

Another important property of MHC is its high degree of polymorphism, with thousands of class I (HLA-A, HLA-B, HLA-C) and class II (HLA-DP, HLA-DQ, HLA-DR) alleles. Even inbred mouse strains have a diverse set of MHC (class I alleles H2-K, H2-D, H2-L and class II alleles H2-A and H2-E). This serves to diversify the T cell response to pathogens at a population level but adds complexity to studying the repertoire of peptides presented by them. The HLA locus is one of the most strongly disease-associated genetic loci in autoimmunity. Genetic variation in HLA alleles can confer protection from disease, e.g., HLA-DR2 in T1D, or can contribute to the risk of developing autoimmunity, e.g., HLA-DQ2, DQ8, DR3, DR4, and HLA-A*02:01 in Type 1 Diabetes.
Similarly, in RA, HLA-DRB1*04:01, 04:04, 04:05, 01:01, and 10:01 are risk alleles. The same HLA allele can be both protective and pathogenic for different autoimmune diseases. One of the most well-studied examples of such alleles is HLA-DR15 (HLA-DRB1*15:01), which confers heightened risk of developing Multiple Sclerosis but is protective in T1D. Variations in HLA alleles can predispose to or confer protection from autoimmunity by influencing the repertoire of peptides presented on them, and thereby shaping the adaptive immune response. Different HLA alleles have distinct peptide-binding preferences, resulting in unique peptide repertoires presented on the cell surface. A striking example of how HLA-specific peptide preferences can positively and negatively influence immunity is found in HLA-B*27:05, a class I HLA allele. HLA-B*27:05 is well known to be enriched in individuals who can naturally control HIV infection, possibly by limiting viral escape from T cells. On the other hand, HLA-B*27:05 is also strongly associated with Ankylosing Spondylitis, possibly by presenting self-peptides that mimic microbial peptides.

Another key set of studies providing a mechanistic explanation of how minute variations in HLA can affect immune repertoires comes from T1D. The accepted animal model of T1D is Non-Obese Diabetic (NOD) mice, which bear the class II MHC allele I-Ag7. I-Ag7 possesses similar binding properties to the human T1D risk alleles HLA-DQ8 and DQ2. Studies have shown that these alleles have a neutral residue at the 57th position of the MHC-II β chain, resulting in a unique peptide-binding pocket. This limits the diversity of the islet-associated self-peptide repertoire, thereby restricting the islet-reactive TCR repertoire. Upon modification of I-Ag7 to change the β57 residue to D/E, the TCR repertoire to insulin was altered, suggesting that changes at a single position in HLA/MHC can have profound effects on autoimmunity.
These studies underscore the importance of identifying the peptides presented by disease-associated MHC/HLA to gain a better understanding of autoimmunity. An extensive literature and public database search was done to highlight some key HLA associations, epitopes, and antigens in the context of autoimmune diseases, which are listed in Table .

The importance of MHC and epitope presentation has been documented since the mid-20th century. Seminal work on tissue rejection and transplantation immunology in the early to mid-20th century led to the discovery of the MHC genetic locus, first in mice (H-2) and then the HLA locus in humans. In the 1960s and 1970s, the major focus was on describing the genetics, structure, and function of MHC, revealing their critical role in antigen presentation to T cells. Subsequent studies throughout the latter half of the 20th century revealed the finer details of T cell recognition of class I and class II MHC, establishing the principle of MHC as peptide receptors that present degraded proteins. The molecular basis of MHC restriction was firmly established by elucidating the crystal structure of HLA-A2, and nearly a decade later, in 1996, the TCR-peptide-MHC complex. Our understanding of the landscape of epitopes presented by MHC has evolved alongside these discoveries. In the past decade, interest in identifying the peptides bound to MHC, collectively referred to as the ‘immunopeptidome’, has exploded across the biomedical research community. This has been enabled by the still-nascent field of ‘immunopeptidomics’, which lies at the intersection of immunology and proteomics and uses high-resolution mass spectrometry (MS) to identify and quantify the peptide repertoire presented by MHC. Since its inception, immunopeptidomics has facilitated our understanding of T cell responses and has greatly enhanced the identification and profiling of antigen-specific T cells.
Here, we will review how immunopeptidomics has been deployed to understand key autoantigens in autoimmune disorders. First, we will lay out the fundamentals and the technical considerations of implementing immunopeptidomics. Second, we will discuss how immunopeptidomics has transformed our understanding of post-translationally modified (PTM) epitopes. Third, we will discuss how immunopeptidomics has facilitated the identification of autoantigens and autoreactive T cells in three exemplary autoimmune disorders. Finally, we will summarize the outstanding challenges and provide future perspectives on utilizing immunopeptidomics for autoantigen discovery.

Immunopeptidomics relies on MS for identification of protein fragments, similar to proteomics. However, given that peptide-MHC is a binary complex that is non-covalently bound, peptides must be decoupled from MHC prior to MS. The four major steps in immunopeptidomics are: 1) pulldown of MHC complexes using MHC-specific antibodies immobilized on beads; 2) elution of peptides off of MHC using non-enzymatic mild acid treatment; 3) peptide purification with either C18 reverse-phase separation or size-based methods such as size exclusion chromatography or filtering through a specific molecular weight cut-off filter, after which purified peptides are subjected to MS; and 4) bioinformatic identification of peptides from MS spectra. Each of these steps needs to be carefully designed, with several experiment-specific considerations. These are highlighted in Fig. . Some of the major considerations are: the need for a large number of starting cells or amount of tissue, the level of MHC/HLA expression on the target cells, and the availability of antibodies specific to the MHC/HLA alleles under investigation. The pioneering study that developed immunopeptidomics reported only tens of HLA-bound peptides in a single analysis from billions of cells.
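As a small illustration of the bioinformatic end of this workflow (step 4), identified sequences are routinely sanity-checked against the peptide length ranges expected for each MHC class. The function below is ours; the length ranges are those quoted earlier (8-12 residues for class I, 10-25 for class II), and the known epitopes are used purely as length examples:

```python
def split_by_expected_length(peptides):
    """Partition identified peptides into candidate class I (8-12-mer) and
    class II (10-25-mer) ligands. The ranges overlap, so a 10-12-mer is a
    plausible ligand of either class."""
    class_i = [p for p in peptides if 8 <= len(p) <= 12]
    class_ii = [p for p in peptides if 10 <= len(p) <= 25]
    return class_i, class_ii

# An 8-mer and a 9-mer (class I range), a 13-mer (class II range),
# and a 4-mer that falls outside both ranges and is discarded.
hits = ["SIINFEKL", "GILGFVFTL", "PKYVKQNTLKLAT", "ACDE"]
class_i, class_ii = split_by_expected_length(hits)
```

In practice such a length filter is only a first pass before spectral scoring and binding-motif checks.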
Recent advances in MS methods have enabled sensitive detection of peptides from limited samples, thus allowing for the detection of antigens from primary cells and rare cell populations . Today, samples of fewer than a billion cells can yield tens of thousands of peptides, including those with PTMs . These technical advances are largely fueled by liquid chromatography coupled to tandem MS (LC-MS/MS), as well as by sophisticated computational tools to identify and quantify spectra. For instance, peptide purification techniques have evolved to minimize peptide loss. They range from molecular weight cut-off filters or size exclusion chromatography to more sophisticated C18 reverse-phase chromatography that separates compounds based on hydrophobicity. Advances in sample preparation during MS have also led to increased yields of peptides. For instance, a recent report used acetonitrile fractionation followed by the introduction of ion mobility during gas-phase separation to increase the number of detected peptides from the same samples by 2–5-fold . Similarly, Gravel et al. developed ion mobility separation-based time-of-flight (TOFIMS) MS to increase the sensitivity of immunopeptidomics . Several computational tools have led to increased sensitivity of detection and wide accessibility to users. For instance, ‘Immunolyser’ is a web-based tool that allows a standardized and streamlined workflow for immunopeptidomics that is accessible to researchers without any prior experience in MS . Similarly, SysteMHC Atlas v2.0 is a resource that has collected over 1 million peptides across over 7000 MS studies and developed a suite of computational tools for the analysis of PTMs, which has led to the identification of over 470,000 modified peptides . Another key consideration for calling peptides is the selection of an appropriate database. For instance, using only the annotated genome will miss a large number of peptides that may be derived from unannotated ORFs , .
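The point about database choice can be made concrete with a toy example: identified peptides are assigned to source proteins by matching against the chosen sequence database, so a peptide whose source is absent from that database (for example, one arising from an unannotated ORF) simply cannot be assigned. The protein names and sequences below are invented for illustration.

```python
# Toy illustration of peptide-to-protein assignment by exact substring
# search against a chosen database. Protein names/sequences are made up.

database = {
    "PROT_A": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
    "PROT_B": "MSLLTEVETYVLSIIPSGPLKAEIAQRLEDV",
}

def assign_sources(peptides, db):
    """Map each peptide to the list of database proteins containing it."""
    return {pep: [name for name, seq in db.items() if pep in seq]
            for pep in peptides}

result = assign_sources(["QISFVKS", "IPSGPLK", "WWWWWWW"], database)
# "WWWWWWW" maps to nothing: a peptide from a source missing from the
# database is either discarded or flagged, never correctly assigned.
```

Real search engines match spectra rather than sequences and allow mismatches and modifications, but the dependence on the chosen database is the same.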
Moreover, implementing machine learning algorithms such as PROSIT, MS2rescore, and MSbooster has led to increased sensitivity and reduced false-negative rates – . Finally, while each of these advances has enhanced peptide detection individually, combining them in various ways has synergistically led to significantly better outputs from immunopeptidomics . PTMs refer to chemical modifications of amino acid side chains occurring after translation . PTMs can profoundly impact the structure, function, and localization of proteins . Of the 400 different types of PTMs that have been described in humans, phosphorylation, acetylation, and ubiquitination occur most frequently and are the best studied – . PTMs also alter the immunogenicity of peptide epitopes, which can be important in autoimmune disorders such as T1D and RA. The first demonstration of a PTM epitope presented on MHC was in melanoma, where a tyrosinase-derived epitope was found to be deamidated . Since then, numerous studies have profiled PTM epitopes on MHC – . It is estimated that peptides containing PTMs make up ~10% of the human immunopeptidome – . Importantly, dysregulation of PTMs is increasingly being implicated in the pathogenesis of autoimmune disease , . The contribution of PTMs to autoimmunity is manifold, involving a combination of host genetic factors and environmental exposures. Mechanistically, PTM of self-proteins can generate new epitopes, known as neoepitopes, capable of eliciting robust immune responses and breaking immune tolerance . PTM epitopes can alter binding to MHC and/or TCRs, thereby creating immunogenic neoepitopes – . PTMs add a layer of complexity to immunopeptidomics studies because of their low abundance, altered spectral profiles, and the computational hurdles in their identification , . PTMs may also be detected as artifacts of the ionization that occurs during MS.
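In MS data, a PTM appears as a characteristic shift of the peptide's monoisotopic mass (and of its fragment masses). This is also why some modifications are hard to distinguish: deamidation and citrullination, for example, produce nearly identical shifts of about +0.984 Da. A minimal sketch using standard monoisotopic residue and modification masses (values rounded; SIINFEKL is used purely as a familiar test peptide, and only the residues it needs are listed):

```python
# Sketch: a PTM shifts the peptide's neutral monoisotopic mass by a
# characteristic delta. Residue masses are a subset of the standard
# monoisotopic values (Da); modification deltas follow common Unimod values.

RESIDUE_MASS = {
    "S": 87.03203, "I": 113.08406, "N": 114.04293, "F": 147.06841,
    "E": 129.04259, "K": 128.09496, "L": 113.08406,
}
WATER = 18.01056  # added once per peptide (terminal H and OH)

MOD_SHIFT = {
    "phospho": 79.96633,     # +HPO3
    "acetyl": 42.01057,      # +C2H2O
    "deamidation": 0.98402,  # N/Q -> D/E; citrullination gives the same delta
}

def peptide_mass(seq, mods=()):
    mass = sum(RESIDUE_MASS[aa] for aa in seq) + WATER
    return mass + sum(MOD_SHIFT[m] for m in mods)

base = peptide_mass("SIINFEKL")              # ~962.54 Da unmodified
phos = peptide_mass("SIINFEKL", ("phospho",))
```

Search engines exploit exactly these deltas when considering modified peptide candidates, which is also why near-isobaric modifications inflate the computational search space.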
It is appreciated that the immunopeptidomes of class I and II MHC differ in the types, positions, and ratios of PTMs . These preferences likely reflect: a) the distinct research questions and model systems used, b) inherent differences in antigen processing and presentation between class I and II MHC, and c) the diversity of pathways leading to PTMs , . While PTMs are often a small fraction of the immunopeptidome, their importance as autoantigens is outsized. For instance, a major epitope known as 2.5HIP (a hybrid peptide formed by post-translational splicing of insulin and chromogranin) was shown to be an essential antigen in NOD mice. However, it is often not detected in immunopeptidomics datasets. The importance of PTMs may often be disease-specific, and therefore PTM identification might not be relevant in all cases. Recent advances in single-cell TCR sequencing have allowed a large number of disease-associated T cells to be profiled; however, knowledge of their cognate epitopes lags behind by orders of magnitude . Experimentally, there are two types of approaches used for T cell epitope discovery. Antigen-directed approaches start with a limited number (typically <1000) of peptides and aim to identify T cells responsive to them. For instance, Wang et al. characterized the immunopeptidome of HLA-DR15 and identified self-epitopes and their microbially derived mimics as autoantigens in Multiple Sclerosis . On the other hand, TCR-directed approaches start with key TCRs and screen them against large epitope libraries (up to 10^7). We have recently identified novel T1D autoantigens using cell-based epitope libraries that were derived from a mouse pancreatic islet immunopeptidomics study , . In both cases, knowledge of the peptides that were actually presented on MHC/HLA augmented antigen discovery by narrowing down the possible universe of epitopes recognized by the T cells under investigation , , .
In the case of autoimmunity, this is especially important, as the scale of potential self-epitopes is genome-wide. In addition to the ~20,000 annotated coding genes, there are >10,000 unannotated or non-canonical open reading frames that contribute to the immunopeptidome. Furthermore, epitopes with PTMs add to this landscape of potential autoantigens. Immunopeptidomics can be used to scale down the number of epitopes under investigation, allowing better throughput and more targeted antigen discovery. In the next section, we will describe how immunopeptidomics approaches have helped autoantigen discovery in specific autoimmune diseases: T1D, SLE, and RA. Type 1 Diabetes T1D, or autoimmune diabetes, is a chronic disease caused by insulin deficiency due to the destruction of the insulin-producing β cells in the pancreatic islets of Langerhans , . Autoantibodies against insulin, the 65-kDa form of glutamic acid decarboxylase (GAD65), insulinoma-associated protein 2 (IA-2), and zinc transporter 8 (ZnT8) are associated with T1D, but their role in the pathophysiology of the disease is not clear . It has been shown that autoreactive CD4+ and CD8+ T cells infiltrate the pancreas and mediate destruction of β cells. CD4+ T cells can propagate a pro-inflammatory environment through cytokine secretion and by enhancing the function of cytotoxic CD8+ T cells, which can directly kill β cells. T1D can be modeled in NOD mice, which share several key features of the disease, including the presence of islet autoantibodies and infiltration of autoreactive CD4+ and CD8+ T cells in islets – . Islet-infiltrating T cells in NOD mice and in T1D are known to recognize β cell autoantigens, many of which overlap. The restricted MHC diversity, as well as the availability of samples and the reproducibility of the disease course, has allowed robust immunopeptidomics studies in NOD mice , . In contrast, the high HLA heterogeneity and limited access to viable β cells have impeded similar studies in humans.
Islets harvested from cadaveric donors with T1D have very few β cells remaining, and those from donors without T1D have naturally low levels of HLA expression in the absence of inflammation, making direct detection of presented peptides challenging. Moreover, even in inflamed islets, class II HLA expression on β cells is low – . Therefore, the characterization of HLA-bound peptides from human β cells has been limited , . To circumvent these issues, approaches such as stably transfected human non-β cell lines expressing specific autoantigen(s) and cell-surface HLA allotypes of interest , , or human β cell lines generated by targeted oncogenesis have been used. A recent finding in T1D was the presence of hybrid insulin peptides (HIPs), which are epitopes generated by post-translational splicing of insulin with other proteins. Studies have shown that HIPs are autoantigens for pathogenic CD4+ T cells in human T1D and in NOD mice , , confirming the notion that PTM epitopes are key autoantigens , . Importantly, while the initial experiments with HIPs used synthetic peptides, their presence in the proteomes and immunopeptidomes derived from β cells has reaffirmed that HIP formation and recognition is a natural process that occurs in T1D . Subsequent studies based on these results have led to the identification of novel HIPs as diabetogenic epitopes. We wish to highlight two recent studies that have effectively combined immunopeptidomics and antigen discovery approaches to identify novel autoantigens in T1D. Gonzalez-Duque et al. performed class I HLA immunopeptidomics on a human β cell line, ECN90, and on islets, and identified ~3000 peptides, including native peptides, PTM epitopes, splice variants, and transpeptidation products (which are similar to HIPs, but whose existence is still debated). Using synthetic peptides and peptide-MHC multimers, the authors identified several novel autoantigens, including insulin gene enhancer protein ISL-1 and UCN3.
T cells recognizing these autoantigens were shown to be enriched in the pancreata of T1D donors as compared with non-diabetic donors . In the second study, Wan et al. performed class II MHC immunopeptidomics on pancreatic islets and draining lymph nodes in NOD mice and identified >4000 peptides bound to I-Ag7 . They also identified many PTM epitopes, splice variants, and HIPs. Using this immunopeptidomics dataset, our group built epitope libraries presenting >4000 epitopes in I-Ag7, identified targets of islet-infiltrating T cells de novo, and found a predominance of HIP-reactive T cells . These studies exemplify that combining immunopeptidomics with antigen discovery can be a powerful strategy for identifying autoantigens. Systemic Lupus Erythematosus SLE is a multisystem, chronic autoimmune disease involving a complex interaction of impaired apoptotic clearance, complement activation, and immune complex formation, which leads to dysregulated innate and adaptive immunity – . SLE is characterized by the presence of autoantibodies to nuclear and cytoplasmic antigens . While the importance of B cells and anti-nuclear antibodies in SLE pathogenesis is appreciated, tissue-infiltrating T cells also play a key role . The antigenic landscape of autoreactive T cells in SLE is poorly defined, with only a handful of known autoantigens, such as histones, described to date , . Interestingly, histones and other nuclear proteins are broadly modified post-translationally , but whether these PTMs lead to immunogenic epitopes is not known. Proteomic profiling of tissues and plasma in SLE patients and mouse models has shown changes in the soluble proteome associated with inflammation and immune dysregulation , . Antibodies to canonical autoantigens in SLE, such as Smith, Ro, La, and histones, have been detected in patient sera and serve as disease biomarkers , .
The first identification of T cell specificities in SLE came from curating a small list of potential autoantigens from proteomics datasets. Critically, the link between SLE proteomes and SLE immunopeptidomes is missing, largely due to the lack of immunopeptidomics data from mice or humans. SLE, unlike T1D, has a high level of heterogeneity in disease course, target tissues, and environmental triggers, making it challenging to home in on the key antigen-presenting populations. Only a small number of studies have reported immunopeptidomes in mouse models of SLE. In the early 2000s, Freed et al. characterized the peptides eluted from class II MHC (I-Ak and I-Ek alleles) from spleens of the SLE-prone MRL/lpr mouse model. A very small number of peptides (<20) was detected, including some from potential SLE autoantigens such as histones . The study uncovered only a small number of peptides owing to technical limitations, such as a high lower limit of detection and the low abundance of the peptides . We have recently performed immunopeptidomics on the kidneys of MRL/lpr mice, the kidney being the primary pathologic site in SLE. We identified >3000 epitopes presented on I-Ek in the kidneys of MRL/lpr mice and used interaction language models to predict potential immunogens. In concordance with previous reports, we did indeed detect peptides derived from histones and ribosomal proteins . In addition, we have developed an algorithm to predict the HLA restriction of previously unstudied peptides; this will advance our ability to tackle HLA diversity. We believe that, with the recent technological advances in immunopeptidomics, the time is ripe to deploy it for autoantigen discovery in SLE. However, several key considerations still need to be taken into account, including HLA diversity in humans, the availability of kidney tissue from patients, and possible PTMs.
Moreover, profiling the immunopeptidome will need to be followed by experimental validation of the immunogenicity of the identified peptides in mice and humans. The application of immunopeptidomics will be essential for identifying autoantigens in SLE that can serve as diagnostic and therapeutic targets. Rheumatoid Arthritis RA is a systemic, inflammatory autoimmune disease characterized by immune infiltration into the synovial joints, leading to varying degrees of functional impairment among patients . A prognostic hallmark of RA is the presence of autoantibodies that recognize self-proteins harboring PTMs such as citrullination, homocitrullination (carbamylation), and acetylation – . The genetic association with certain HLA-DR alleles and the presence of anti-citrullinated protein antibodies (ACPAs) suggest a pathophysiological role for CD4+ T cells in disease , . While CD4+ T cell infiltration in the synovial tissue is characteristic of RA, the precise autoantigens recognized by these cells are poorly defined , . Most studies in RA have focused predominantly on the HLA-DR immunopeptidome, given that multiple HLA-DR molecules are strongly associated with the disease. Among the best-described risk alleles for RA is the shared epitope (SE), a set of HLA-DRB1 alleles containing a consensus amino acid sequence at residues 70-74 of the HLA-DRβ chain . SE positivity strongly correlates with ACPA positivity and is associated with earlier onset of RA, increased disease severity, and higher mortality , . SE-containing HLA-DR alleles are thought to show enhanced presentation of arthritogenic antigens, including citrullinated peptides, leading to the selection of autoreactive T cell repertoires – and the promotion of ACPA formation . In a 2010 study, 1427 HLA-DR-presented peptides derived from 166 source proteins were identified in the synovia of two RA patients .
Another study examining clinical samples of synovial tissue, synovial fluid mononuclear cells, and peripheral blood mononuclear cells identified 1593 peptides originating from 870 source proteins . A key mechanistic insight into how certain HLA alleles influence genetic risk was obtained through immunopeptidomics studies comparing 962 unique peptides bound to the strongly RA-associated DRB1*01:01, DRB1*04:01, and DRB1*10:01 alleles and the non-RA-associated DRB1*15:01 allele . It was found that the peptide repertoires differed considerably in size, protein origin, composition, and affinity, with only about 10% overlap among RA-associated allotypes. Such empirical data on allelic binding preferences will enhance bioinformatics-based prediction tools that infer peptide repertoires. For example, Darrah et al. used the NetMHCII-2.3 binding prediction algorithm combined with proteolytic mapping to predict the binding affinities of peptides derived from native and citrullinated antigens to RA-associated SE alleles (i.e., DRB1*01:01, *04:01, *04:05, and *10:01). They demonstrated that structural changes induced by citrullination alter susceptibility to proteolytic cleavage, thus modulating antigen processing and revealing cryptic epitopes . Additionally, Kaabinejadian et al. used MHCMotifDecon on existing immunopeptidomic datasets and found that the secondary DR alleles (HLA-DRB3, -DRB4, and -DRB5), often overlooked due to their strong linkage disequilibrium with the primary HLA-DRB1 allele, contribute significantly to the HLA-DR repertoire. They posit that secondary DR alleles, which display non-redundant and complementary peptide repertoires, warrant consideration as functionally independent alleles in future studies . This mechanistic understanding of RA susceptibility and HLA-associated variability would not have been possible without immunopeptidomics.
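Prediction tools in this family typically report a percentile rank for each peptide-allele pair, and peptides are conventionally binned into strong and weak binders by rank cut-offs. The sketch below uses cut-offs of 2% and 10%, which are commonly used class II defaults; exact thresholds vary by tool and version, so treat both the thresholds and the example ranks as assumptions.

```python
# Hedged sketch of conventional binder binning by predicted percentile rank.
# Thresholds (~2% strong / ~10% weak) are common class II defaults, not a
# universal standard; example peptides and ranks are invented.

def classify(rank_pct, strong=2.0, weak=10.0):
    if rank_pct <= strong:
        return "strong binder"
    if rank_pct <= weak:
        return "weak binder"
    return "non-binder"

predicted_ranks = {
    "PKYVKQNTLKLAT": 0.8,   # low rank = better predicted binding
    "AYMRADAAAGGA": 6.5,
    "GGGGGGGGGGGGG": 45.0,
}
calls = {pep: classify(r) for pep, r in predicted_ranks.items()}
```

Percentile ranks are preferred over raw affinities because they normalize across alleles, which is exactly what makes cross-allele repertoire comparisons like the ones above interpretable.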
Overall, immunopeptidomics has provided valuable insights into the pathogenesis of autoimmune diseases and has helped to define key autoantigens. While immunopeptidomics studies in autoimmune diseases are often not directly translatable, they have tremendous potential to fuel diagnostic and therapeutic approaches. The generation of large datasets of MHC/HLA-bound peptides has led to advances in computational tools to predict epitope binding , , . These algorithms can then be used to predict potential autoantigens. Immunopeptidomics datasets have directly fueled systematic antigen discovery approaches, which are leading to the use of antigen-specific TCR repertoires as diagnostic tools. Moreover, novel autoantigens can be directly used as immunogens to induce tolerance or as targets for immunomodulatory strategies.
Furthermore, immunopeptidomics has revealed mechanistic aspects of the development of autoimmunity, such as cross-reactivity with microbial epitopes, changes in immune landscapes associated with disease states, and PTM-driven alterations in HLA binding and T cell recognition. Identifying the peptides that are presented on MHC/HLA has led to several key advances in autoimmunity, including autoantigen identification, confirmation of the biological relevance of HIPs, and an increasing appreciation of PTM epitopes as pathogenic. Several key considerations remain in the design and utility of immunopeptidomics studies, as highlighted in Fig. . As the experimental techniques to elute and detect peptides and the computational tools to analyze datasets advance, we envision that the number of identified peptides will continue to grow exponentially. This will enhance our understanding of the pathogenic antigens across autoimmune disorders. In summary, as immunopeptidomics opens this chamber of secrets, a treasure trove of autoantigens will be discovered.
Pediatric Condition Falsification Misdiagnosed by Misjudged Weight Growth from the Curve of Measured Weights

Pediatric condition falsification (PCF) is a form of child abuse in which a caregiver, frequently the mother, fabricates or induces illness in the child. Other terminology used in the literature includes: Munchausen Syndrome by Proxy, Fabricated or Induced Illness by Caregivers (FII), Factitious Disorder Imposed upon Another, Factitious Disorder by Proxy, and, more generally, Medical Child Abuse. PCF is rare: epidemiological studies suggest that it affects at least 0.5–2.0 per 100 000 children aged under 16 years, and McClure et al. reported that the rate is at least 2.8 per 100 000 children under 1 year of age [ – ]. An editorial in The Lancet stated that “The best epidemiological studies to date show that health professionals are likely to encounter at least one case of FII during their careers, with pediatricians seeing many more” . The pediatric author of the present paper encountered 3 cases during a period of 40 years. Perhaps contradictory to these statistics (which have been criticized based on double-counting certain case studies in separate articles ) is that the diagnosis of PCF, per exclusionem, has been described as difficult and controversial, with a considerable likelihood of false positives occurring [ , – ]. As an example, Eichner’s seminal publication (page 304) states that mitochondrial disease might be mistaken for PCF and is about 11 times more prevalent; therefore, when a physician cannot find a diagnosis explaining the symptoms of a child, the diagnosis of PCF might be made. When a physician misses a correct diagnosis, PCF might wrongly be supposed. Symptoms may exist even when no diagnosis can be made. The diagnosis of PCF is generally assumed to require proof of all 5 of Rosenberg’s criteria : (1) All other diseases that could explain the symptoms are excluded.
(2) Separation of the child from the caregiver resolves the symptoms. (3) Standard treatments are ineffective. (4) There is objective evidence that the caregiver lies about the symptoms. (5) The caregiver inappropriately seeks second opinions. Unexplained failure to achieve a normal increase in weight, failure to thrive (FTT), is one of the conditions for which the diagnosis of PCF is considered . It requires both the exclusion of a vast list of known causes of FTT and, importantly, an accurate evaluation of the weight curve, as also referenced by Pankratz on page 314. We present a case in which the pediatrician diagnosed PCF “with 100% certainty”, which in itself is very unlikely , a diagnosis that was incorrect due to a misjudged weight gain velocity derived from the curve of measured weights. To the best of our knowledge, this is the first well-documented report of this association. In this case report we aimed to identify why this false-positive diagnosis of PCF occurred. We believe it is important for what follows to give the definition of (weight) growth. From elementary physics, any form of growth is always expressed proportionally to reciprocal time (e.g., 1/year). Growth of weight, expressed as weight gain velocity, is defined as:

weight gain velocity [kg/year] = weight gained in a certain time period [kg] / duration of that time period [year]
The youngest son of normal parents (sixth child, born at term, 3.18 kg birthweight, P25 or −0.6SD standard weight curve) grew along the −2SD weight curve until about 56 days of age, after which he developed a slightly negative weight growth (days 56–120), becoming seriously underweight (see ) and requiring hospitalization (days 99–114). Hirschsprung’s disease was excluded following a colon biopsy. Cow milk allergy was suspected because of frequent episodes of obstipation and undue crying and anxiety after food intake. Without further testing, feeding was subsequently changed to elementary formula. An increased calprotectin level in feces (values between 250 and 1200 versus a normal value of 50 μg/g feces) was found, likely due to cow milk allergy. Tests identified an allele-22 deletion on chromosome 9, but the same mutation was found in the healthy father, so it was considered clinically insignificant. The cow milk allergy might explain other signs and symptoms of the infant: sleeping disorder, frequent periods of obstipation, abdominal cramps, airway infections, and colds. Despite extensive investigations, including immunology, endocrinology, and metabolic disorders, no other explanation for the low weight gain during days 56–120 was found. From day 140, elementary formula feeding was given through a nasogastric tube. Two more hospitalizations occurred (days 155–161 and 315–331). Remarkably, following a period of weight loss (from 8 to 7.35 kg) caused by gastroenteritis (days 337–346), the pediatrician reported: “-- he lost weight for 4 more weeks”; but in reality, weight increase clearly recommenced immediately after the period of sickness ( ). Nevertheless, the infant remained underweight ( ). The pediatrician continuously interpreted low weight as inadequate growth and increased tube feeding to 3 times normal (2.8 liters/24 h daily).
The pediatrician wrote the following conflicting remarks: “it is worrying that he does not grow given the enormous food intake”, and: “there is no medical explanation of how the boy can handle so many calories”. To “prove” that the boy did not receive these calories at home, he was hospitalized (days 489–502). Identical caloric intake caused severe vomiting, as the mother had previously experienced at home and reported to the pediatrician. The pediatrician dismissed the mother’s statements that she had given the prescribed calories and that vomiting also occurred at home, and diagnosed the mother with PCF, but without documented consultations with experienced colleagues, violating the guidelines of the Royal Dutch Medical Association. The infant was subsequently separated from his family. Within the privacy of the Dutch juvenile court system, assigning PCF to the mother was defended by the first pediatrician and 3 child protection agencies, who stated that she had malnourished her boy on purpose and that the boy’s safety required that he remain separated from his parents. The judge, however, disagreed with them and ordered that the boy be returned home. The pediatrician was “flabbergasted”, appealed this decision, and found a colleague (a second pediatrician) who supported the PCF diagnosis. During the second court session, now with 3 judges (including the first judge), written statements included: “it’s difficult to explain how he only grew during hospitalizations” (second pediatrician) and “separation from his parents reversed his growth towards normal” (National Child Protection Council). Both pediatricians and the Council declared that all Rosenberg criteria applied in this case. The 3 judges of the second law court confirmed PCF and prolonged the separation of the infant from his parents. The parents appealed this decision but the third court of appeal reconfirmed PCF. The parents then decided not to appeal further to the Dutch Supreme Court.
Eventually, the infant returned home after 8 months, albeit under legal supervision. Prior to this last court session, even a third pediatrician (from the same hospital as the second pediatrician) confirmed PCF, commenting on the boy’s weight growth during days 346–489: “The weight gain velocity remains continuously negative compared to 0SD; it should have been strongly positive (catch-up growth following malnutrition)”, supported by a fourth child protection agency. Earlier psychiatric and police investigations had unconditionally cleared the parents of being instrumental in the boy’s FTT. Except for the first judge during the first court session, all judges subsequently discarded our analysis of the boy’s weight curve, in which we showed beyond doubt that weight gain velocities at home were even above normal and that none of Rosenberg’s criteria could have applied. Analysis ( , ) shows unmistakably that weight gain velocities at home always exceeded those of the 0SD curve from 120 days onward, by factors varying between 1.3 and 2.3. During separation, the infant grew 2 times slower (not faster, as was stated by the National Child Protection Council) than previously at home (3.1 versus 6.2 kg/year, , ), although still faster than the 0SD rate (2.4 kg/year). Interestingly, during the first 16 weeks back home, weight growth increased again compared to the separation period, from 3.1 to 5 kg/year. Subsequently, the boy developed completely normally, albeit with susceptibility to nasal colds.
Why did 3 pediatricians and 3 child protection physicians, supported by 4 child protection organizations, wrongly judge that this infant had FTT beyond day 120 and that it was caused by PCF of the mother? The answer to this question has scientific as well as sociologic components. Scientifically, during days 56–400, the infant’s weight was below −2SD. However, discussions about the criteria for defining FTT show the following. If the child is doing well, this is contradictory to FTT. Also, the current most important FTT criterion is lack of adequate growth. Olsen concluded in her review that “Weight gain is the predominant choice of indicator”, and “For the time being, FTT predominantly seems to be used to describe children with slow or falling weight gain”. Thus, FTT is defined as insufficient weight gain velocity, not as low weight [ – ]. Not all infants grow above −2SD. Thus, a normal weight gain (see ), also for weights below −2SD, is not a sign of abnormality . Consequently ( , ), the period of FTT in this case lasted only 9 weeks, from 56 to 120 days. It was explained by a cow milk allergy, and the boy gained weight at a normal rate once on elementary feeding. He reached his birth weight curve (P25, −0.6SD) around day 516, 2 weeks after the separation period started, clearly showing that he achieved a significantly above-normal average weight gain velocity at home (after day 120, between 17.1 and 6.2 kg/year versus 7.8 and 3 kg/year for the 0SD curve, during a period of more than 1 year, , ). This contrasts with the third pediatrician’s estimate by a factor of at least 2.1. Stronger catch-up weight velocity was seen after the introduction of solid foods. The judges ordered that the infant be weighed weekly. It is, however, impossible to evaluate weight gain velocity from weekly intervals, since a full or empty bladder or colon can cause the same weight difference as a weekly weight gain.
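To put numbers on this last point, a back-of-the-envelope sketch (our illustration; the ±0.15 kg physiological fluctuation is an assumed magnitude, not a value from the case) shows why a one-week weighing interval is far too short for a reliable velocity estimate:

```python
def velocity_error_bound(noise_kg, interval_days):
    """Worst-case error [kg/year] in a weight gain velocity estimate
    when each weighing can be off by +/- noise_kg (e.g., from a full
    or empty bladder or colon)."""
    return 2 * noise_kg / (interval_days / 365.25)

# With an assumed +/-0.15 kg fluctuation, a 1-week interval allows an
# error of ~15.7 kg/year -- larger than any plausible true velocity --
# whereas an 8-week interval bounds the error at ~2 kg/year.
```

Because the noise term is fixed while the denominator grows with the interval, lengthening the interval is the only way to make the velocity estimate meaningful.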
Most likely, the pediatricians were impressed by the weight below −2SD and confused by the weekly changes in weight. Low weight was wrongly interpreted as low weight gain velocity. The fact that so many physicians made this mistake suggests that we may have identified a hitherto unknown false-positive PCF diagnostic mechanism. The sociological components cover the expected hurdles that all pediatricians, child protection physicians, and agencies kicked over to come to a (false) conclusion. First, the conclusion of the first pediatrician could have been supported by the other physicians and child protection agencies simply because of disinterest in challenging a colleague or reluctance to consider alternative hypotheses, a situation that may frequently occur . Second, a “confirmatory bias” may have developed, in which any information is interpreted negatively, even if contradictory to a PCF diagnosis. An example is the bizarre 3-times-normal food intake (first pediatrician) before and during the fourth hospitalization. This implies an intake of 2.64 liters/day plus drinking of another 0.9 liters/day, giving a total food intake of about 0.4 liters/kg/day (for comparison, an adult would then have to drink 30 liters/day). Such feeding includes a dangerous amount of protein, about 10 g/kg/day (1.5 is normal), which could raise blood urea concentrations to intoxication levels, as well as a dangerous dose of vitamin A according to Dutch Nutrition Center guidelines. Another example is the mother’s nightly stay with her son (third hospitalization), during which deliberate observations acknowledged her as loving and worried, nevertheless making the first pediatrician state in court: “but this does not prove that she is innocent”.
Third, separation of the boy from his family, the so-called “separation test”, is criticized by Pankratz and Wrennall , who state that “In case after case, the separation test is manipulated ----” and “--- the separation test is likely to produce massive numbers of false-positive diagnoses of child abuse”, as indeed occurred in this case by falsely reporting that weight growth velocity prior to separation was smaller than during separation ( ). Fourth, the principle of listening to the parents was violated by all physicians and agencies, particularly by the first pediatrician, whose denial that the mother had reported the boy’s heavy vomiting at home, and thus that “she had lied”, became the ultimate proof of PCF. This additional example of confirmation bias calls for awareness and for training of pediatricians and child protection physicians on the importance of listening to parents. Fifth, the argument that a parent’s denial of guilt is further evidence of PCF was actually used by all 4 child protection agencies. Our analysis produces totally different answers to Rosenberg’s criteria than the written statement of the Dutch National Child Protection Council. We showed that cow milk allergy caused the 2 months of weight loss and thus the FTT; that weight gain velocities at home beyond day 120 were even much stronger than the 0SD curve ( ); and that the mother never lied about her boy’s symptoms and never inappropriately sought second opinions. This proves beyond any doubt that PCF by a caregiver has no relationship with this case.
Our first, science-based, conclusion is that this case report confirms that PCF can easily be misdiagnosed, which emphasizes that pediatricians and child protection physicians must be more careful than was demonstrated here before considering temporary (in this case 9 weeks of) FTT a sign of PCF. Also, this is the first well-documented case demonstrating that 6 physicians were likely unable to correctly assess weight growth from a weight curve, which resulted in a false-positive PCF diagnosis. Correct analysis, requiring only very simple and elementary differential calculus, such as determining the (average) weight gain over a certain age period and dividing it by that period, can prevent this perplexing and likely novel cause of misdiagnosis from occurring again. Our second, sociology-based, conclusion comprises a number of issues that can contribute to PCF misdiagnosis, such as the reluctance of physicians to confront a colleague with alternative hypotheses, confirmation bias in which any information contradictory to PCF is disregarded or played down, the false-positive likelihood of the separation test, the importance of pediatricians listening to parents, and the fact that denial of guilt is considered further evidence of guilt.
9d9fe0db-f94d-428f-9272-157bae93caef | 9535100 | Anatomy[mh] | Colorectal cancer (CRC) remains a leading cause of worldwide cancer‐related mortality. To improve survival outcomes, novel biomarkers and therapeutic targets need to be identified. Over the past decade, treatment of certain cancer types has been revolutionised by the adoption of immune checkpoint blockade; however, this is only utilised successfully in a subset of CRC patients with advanced‐stage mismatch repair deficient (dMMR) disease . The observed variability in response to these targeted immunotherapies is attributed to the immunosuppressive nature of microsatellite stable (MSS) tumours and the heterogeneity associated with CRC. MSS tumours account for ~85% of CRC and are characterised by lower immune infiltration, immune exclusion, and a decreased presence of neoantigens compared to dMMR tumours . These MSS tumours generally have a lower tumour mutational burden, which results in decreased major histocompatibility complex expression on the surface of antigen‐presenting cells, further exacerbating the immunosuppressive nature of MSS disease . Tumour heterogeneity can be driven by dysregulation of many different cellular signalling networks leading to promotion of the hallmarks of cancer . Aberrant cell signalling is initiated via the overexpression of specific cytokines and chemokines produced by many different cell types in the tumour microenvironment (TME). Targeting this dysregulation represents a promising therapeutic strategy for novel and repurposed drugs to be utilised in combination with standard‐of‐care chemotherapy in CRC patients . C‐X‐C motif chemokine ligand 8 ( CXCL8 ) is a signalling molecule elevated in the cancer setting both systemically and within the TME of several solid tumour types . CXCL8 functions as a neutrophil chemoattractant via neutrophils’ surface expression of CXCR2 .
When CXCL8 binds to CXCR2, signal transduction results in the promotion of angiogenesis, cell survival, migration, proliferation, and invasion . Downstream transcription factors and pathways activated by CXCL8 include mitogen‐activated protein kinase (MAPK), protein kinase B (Akt), extracellular‐signal‐regulated kinase (ERK), and signal transducer and activator of transcription 3 (STAT3), all of which have been linked to tumour progression and whose hyperactivity is associated with poor clinical outcomes . Although a prognostic role for CXCL8 in CRC has already been reported in the literature, there is limited evidence regarding the importance of the spatial distribution of CXCL8 expression within different compartments of the TME. Production of CXCL8 in the TME can be influenced by the inflammatory milieu, including the presence of CXCL12, IL‐1, IL‐6, TNF‐α, and factors such as hypoxia and reactive oxygen species . Numerous cell types can produce CXCL8, including epithelial cells, endothelial cells, tumour‐associated macrophages, cancer‐associated fibroblasts (CAFs), and tumour cells themselves . Therefore, the spatial distribution of CXCL8 expression in the TME could be important in furthering our understanding of the biology of CXCL8 in driving cancer progression. This study aimed to investigate the prognostic role of CXCL8 within the tumour epithelium and tumour‐associated stroma utilising a retrospective cohort of stage I–IV CRC patients undergoing surgery with curative intent. Expression of CXCL8 mRNA within each TME compartment (tumour epithelium/tumour‐associated stroma) was assessed for association with clinical characteristics including survival outcomes and tumour histology. Furthermore, the underlying biology of patients with high levels of CXCL8 was investigated using matched mutational data.
A second retrospective cohort of synchronously resected primary tumours and matched liver metastases was assessed via immunohistochemistry (IHC) for protein expression of the CXCL8 receptor CXCR2 to determine any association with clinicopathological features in the metastatic setting.
Patient cohorts A retrospective cohort (cohort 1) consisting of 1,030 stage I–IV CRC patients undergoing potentially curative resection across Greater Glasgow and Clyde (GGC) hospitals between 1997 and 2007 was utilised in the study. Tumours were staged with the fifth edition of TNM staging and clinical follow‐up data were last updated in 2017 from NHS GGC Safe Haven data. At this time, 324 patients (32%) had died of primary CRC, 332 patients (32.8%) had died of other causes, and 355 patients (35.1%) were still alive. Cancer‐specific survival (CSS) (date of surgery until the last follow‐up) was used as a clinical endpoint throughout this study. Mean follow‐up time was 139 months. Patients were excluded from the analysis if they received neoadjuvant therapy, emergency surgery, and/or died within 30 days of surgical procedure. Due to the limited tissue left in the blocks of each tissue microarray (TMA), valid cores were only available for 438 patients from the cohort. This study was approved by the West of Scotland Research Ethics Committee (16/WS/0207) and patient information is held within the Glasgow and Clyde Safe Haven (12/WS/0142). A second cohort (cohort 2) consisted of 46 stage IV CRC patients who underwent synchronous resection of colorectal primary tumour and liver metastases between April 2002 and June 2010 at Glasgow Royal Infirmary. Information on date and cause of death was determined via access to the NHS GGC clinical portal. Clinical follow‐up data were last updated in 2017 and at this time the mean survival time was 40.14 months and 40% of patients ( n = 24) were alive, 50% ( n = 30) had died of cancer, and 5% ( n = 3) had died of unrelated causes. Due to the size of cohort 2, no exclusion criteria were applied prior to statistical analysis. This study was approved by the West of Scotland Research Ethics Committee (#357). 
RNAscope ® RNA in situ hybridisation using RNAscope (ACD Bio, Newark, CA, USA) was performed at the CRUK Beatson Institute (performed by CN) on previously constructed TMAs consisting of patients from cohort 1 to detect the PPIB housekeeping gene and CXCL8 mRNA. Staining was performed using a Leica Bond Rx system (Leica Biosystems, Wetzlar, Germany). Expression was quantified using Halo digital pathology software (Indica Labs, Albuquerque, NM, USA) in copies per μm 2 . A classifier was built to distinguish between tumour epithelium and stromal‐rich areas of the TMA cores. Raw scores for CXCL8 expression within the tumour and stroma were normalised to PPIB scores. Cut‐offs for high and low expression were determined using survminer, survival, maxstat, and tidyverse packages in R studio (v1.3) based on CSS (RStudio, Boston, MA, USA). A subset of the cohort ( n = 12) was dual stained/probed for alpha‐smooth muscle actin (α‐SMA) protein and CXCL8 RNA to confirm the presence of CXCL8 mRNA within the stroma. Co‐localisation staining was performed on 4‐μm formalin‐fixed paraffin‐embedded sections (FFPE) full sections which had previously been baked at 60 °C for 2 h. The staining was performed on a Leica Bond Rx strictly following Bio‐Techne's co‐localisation kit/protocol. The RNAscope probe used was Hs‐IL8 (310388, Bio‐Techne, Minneapolis, MN, USA) and the α‐SMA antibody clone D4K9N (19245, Cell Signaling Technologies, Boston, MA, USA). Immunohistochemistry Immunohistochemical staining was performed on FFPE resections from cohort 2 to assess the expression of CXCR2 on immune cells in the TME. Staining was performed using the Leica Bond Rx autostainer. Sections underwent on‐board dewaxing (AR9222, Leica) followed by heat‐induced epitope retrieval using ER2 retrieval buffer (AR9640, Leica). 
The sections were stained with CXCR2 primary antibody (PA1‐20673, Thermo Fisher Scientific, Waltham, MA, USA) at a dilution of 1:100 followed by rabbit envision secondary antibody (K4003, Agilent Technologies, Santa Clara, CA, USA). Sections were visualised using diaminobenzidine before counterstaining with haematoxylin using an Intense R Kit (DS9263, Leica Biosystems). Stained sections were scanned onto the Slidepath platform (Leica Biosystems, Milton Keynes, UK) using a Hamamatsu NanoZoomer (Hamamatsu, Welwyn Garden City, UK) for visualisation. Assessment of immune cell infiltration was performed at ×20 objective magnification using a point score digital algorithm available within the Slidepath platform, validated by 10% non‐automated point counts of the same area (performed by KAFP). Cells were counted in three different locations of each tumour within a 4‐μm 2 grid, and an average taken to account for heterogeneity. Only fields within the tumour (including cancer cell nests and surrounding tissue stroma) were counted. Scores were averaged and median values were utilised as a cut‐off for high and low expression. Histopathological phenotype assessments Tumour stroma percentage (TSP) assessment was performed as previously described . In brief, full face haematoxylin and eosin‐stained sections were assessed manually for the composition of stromal cells within the intra‐tumour area, performed by JHP and AKR with validation by JE and further validation on a subset of the cohort performed by a clinical pathologist (NNM). Tumours with >50% stromal volume were graded as high, and ≤50% was considered low. Klintrup–Makinen (KM) grade was determined as previously described . The invasive margin was analysed for the presence of immune cells; patients with a florid cup (3) or thin continuous band of cells (2) were considered high for immune influx. Tumours with a patchy band (1) or no immune cells present at the margin were considered low for immune infiltration.
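The survival-based cut-off determination described in the RNAscope section (survminer/maxstat in R) amounts to scanning candidate thresholds for the one that best separates Kaplan–Meier curves. A minimal stdlib-only Python sketch of that idea follows; it uses toy data, is not the authors' pipeline, and omits the multiple-testing correction that the real maximally selected rank statistic applies:

```python
def logrank_chi2(times, events, groups):
    """Two-group log-rank chi-squared statistic.
    times: follow-up times; events: 1 = death observed, 0 = censored;
    groups: 0/1 labels (e.g., low/high expression)."""
    obs = exp = var = 0.0
    for t in sorted({t for t, e in zip(times, events) if e}):
        risk = [(ti, ei, gi) for ti, ei, gi in zip(times, events, groups) if ti >= t]
        n = len(risk)
        n1 = sum(1 for _, _, gi in risk if gi == 1)        # at risk in group 1
        d = sum(1 for ti, ei, _ in risk if ti == t and ei)  # deaths at t
        d1 = sum(1 for ti, ei, gi in risk if ti == t and ei and gi == 1)
        obs += d1
        exp += d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return (obs - exp) ** 2 / var if var > 0 else 0.0

def best_cutoff(scores, times, events):
    """maxstat-style scan: return the (cut-off, chi2) pair giving the
    strongest survival separation. In the study's setting, `scores`
    would be CXCL8 counts already normalised to the PPIB signal."""
    best_cut, best_chi2 = None, -1.0
    for cut in sorted(set(scores))[:-1]:        # keep both groups non-empty
        groups = [1 if s > cut else 0 for s in scores]
        chi2 = logrank_chi2(times, events, groups)
        if chi2 > best_chi2:
            best_cut, best_chi2 = cut, chi2
    return best_cut, best_chi2
```

Because every candidate threshold is tested, the naive log-rank p-value at the chosen cut-off is optimistic; maxstat's adjusted p-value accounts for this, which is why this sketch should be read as the search step only.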
Mutational profiling Mutational profiling was performed on a subset of patients from cohort 1 ( n = 252). DNA was previously extracted from FFPE sections by NHS molecular diagnostics, Dundee. DNA quality and concentration were determined using the Qubit assay (Thermo Fisher). Sequencing was outsourced and performed by the Glasgow Precision Oncology Laboratory (GPOL) using a custom in‐house designed panel of 151 cancer‐associated genes run on a HiSeq4000 machine (Illumina, San Diego, CA, USA). The publicly available cBioPortal resource was utilised to validate findings in the Cancer Genome Atlas program (TCGA)/PanCancer Atlas adenocarcinoma cohort ( n = 594) available at https://www.cbioportal.org/ . Statistical analyses Data analysis of CXCL8 expression in cohort 1 was performed using IBM SPSS (Chicago, IL, USA). The relationship between tumour cell/stromal CXCL8 expression and CSS was assessed using Kaplan–Meier survival curves. Chi‐squared tests were utilised to assess associations between CXCL8 expression and clinicopathological features. Analysis of mutational profiling data was performed using Maftools in R Studio (v1.3, RStudio, PBC, Boston, MA, USA). Oncoplots and a forest plot were constructed to visualise differences in mutations between high and low stromal CXCL8 groups. Statistical significance was set to p < 0.05.
Expression of CXCL8 within the stromal compartment is associated with reduced CSS After exclusion criteria were applied to the CRC cohort of 438 patients from cohort 1, 380 patients had valid CXCL8 scores and were included in downstream analyses (supplementary material, Figure ). Representative images of TMA cores negative, low, and high for CXCL8 expression are shown in Figure . Positive staining was detected within the tumour and stromal compartments and full sections from a subset of patients were dual stained/probed for α‐SMA via IHC and CXCL8 , with representative images of high/low expression in α‐SMA‐positive and ‐negative areas shown in Figure . The mean score for stromal CXCL8 was 0.79 copies per μm 2 ( n = 386) and for CXCL8 within tumour cells was 0.58 copies per μm 2 ( n = 387). Survminer cut‐off point analysis using the continuous variables determined the optimal cut‐off point for high and low expression to be 0.32 copies per μm 2 for stromal CXCL8 and 0.65 copies per μm 2 for tumour cell CXCL8 . This resulted in 194 patients classified as high for stromal CXCL8 and 192 classified as low. For tumour cell CXCL8 expression, 95 patients were classified as high and 290 fell into the low group. Pearson correlation analysis revealed a weakly positive correlation between CXCL8 expression counts from the stroma and tumour cell compartments ( r = 0.224, p < 0.001). Kaplan–Meier survival analysis identified a significant association between high CXCL8 mRNA expression within the stromal compartment and reduced CSS in the full cohort (HR = 1.904, 95% CI: 1.045–3.468, log‐rank p = 0.035) (Figure ). Patients classified as high for stromal CXCL8 ( n = 133) had a mean survival time of 148 (95% CI: 133–162) months compared to 169 (95% CI: 157–181) months observed in patients classified as low for stromal CXCL8 expression ( n = 137). 
This relationship was potentiated in patients with right‐sided colon disease (HR = 2.858, 95% CI: 1.247–6.554, log‐rank p = 0.009) (Figure ). In right‐sided cases, patients classified as high for stromal CXCL8 ( n = 56) observed a mean survival time of 140 (95% CI: 118–162) months compared with 181 (95% CI: 166–197) months in patients classified as low for expression of stromal CXCL8 ( n = 56). When cases were stratified by MMR status, there was a significant association between stromal CXCL8 expression and CSS in pMMR ( p = 0.021) but not dMMR cases ( p = 0.565); however, this may be due to limited patient numbers in the dMMR group (supplementary material, Figure ). Kaplan–Meier survival analyses showed that there was no association between CXCL8 expression within the tumour cell compartment and CSS in the full cohort (Figure ) or when the cohort was stratified to include only right‐sided tumours (Figure ). Stromal CXCL8 expression was associated with outcome at the univariate level ( p = 0.038), but was not found to be independently prognostic upon multivariate survival analysis of the full cohort ( p = 0.071) (supplementary material, Table ). High stromal CXCL8 expression is associated with unfavourable tumour histological features Chi‐squared tests were performed to determine any association between stromal CXCL8 expression and clinical characteristics/histological tumour features (Table ). High CXCL8 within the stromal compartment was significantly associated with a higher TSP ( p = 0.040) and higher frequency of tumour budding ( p = 0.002). There was a significant association between high stromal CXCL8 and Ki67 proliferation index, with the middle quartiles of Ki67 enriched for CXCL8 expression. This steady‐state level of tumour proliferation generally confers worse prognosis due to the provision of optimal conditions for angiogenesis and hypoxia. 
At higher levels of proliferation, the tumour can outgrow the blood supply and become necrotic, and lower levels of proliferation generally ameliorate the anti‐tumour immune response. In stromally dense tumours (>50% TSP), there was a significant elevation in tumour CXCL8 expression (p = 0.010) and a trend towards higher stromal CXCL8 (p = 0.067) when assessed via non‐parametric Kruskal–Wallis H tests (Figure ). A high frequency of tumour buds was associated with increased stromal CXCL8 expression (p = 0.010) and, to a lesser extent, tumour CXCL8 mRNA copies (p = 0.116) (Figure ).

Patients with high stromal CXCL8 expression combined with high myeloid cell counts observe worse outcomes

Given the prominent role of CXCL8 in neutrophil recruitment, stromal CXCL8 expression was investigated in combination with myeloid cell infiltrates (CD66b+ cells and CD68+ cells) and systemic neutrophil counts. Kaplan–Meier survival analysis revealed a significant association between stromal CXCL8 and CD68+ cell density, with a significant reduction in survival of patients whose tumours were high for both markers compared to those high for one or low for both (HR = 1.880, 95% CI: 1.299–2.720, p = 0.001) (Figure ). The mean survival time for patients with tumours low for both stromal CXCL8 and CD68+ infiltration was 167 (95% CI: 150–184) months, compared to 156 (95% CI: 141–170) months for one high and 122 (95% CI: 104–141) months for both high. Similarly, patients with high tumour stromal CXCL8 and high systemic neutrophil counts observed worse outcomes than those high for one or low for both (HR = 2.037, 95% CI: 1.464–2.834, p < 0.001) (Figure ). Patients with low systemic neutrophil counts and low tumour stromal CXCL8 expression observed a mean survival time of 150 (95% CI: 138–161) months, compared with 122 (95% CI: 110–135) months for one high and 88 (95% CI: 59–118) months in patients high for both markers.
There was no significant association between combined stromal CXCL8 and CD66b+ infiltrates and CSS (Figure ).

Expression of stromal CXCL8 is associated with a distinct mutational background

When the mutational background of the 20 patients from cohort 1 with the highest expression of stromal CXCL8 was analysed, the most frequently mutated gene was APC (75% of cases), followed by TP53 (65%) (Figure ). Among the patients with the lowest expression of stromal CXCL8, APC and TP53 were mutated in 70% of cases (Figure ), and KRAS was mutated in 55% of cases (Figure ). When Fisher's exact tests were performed to determine any differentially mutated genes between groups, the DNA damage signalling kinase ATR (ataxia‐telangiectasia‐ and Rad3‐related) was significantly more likely to be mutated in the low stromal CXCL8 group (p = 0.044) (Figure ). CREB‐binding protein (CREBBP) was mutated more frequently in tumours with high stromal CXCL8, and when further analysis was performed on the TCGA/PanCancer Atlas dataset (n = 594) there was a significant association between CREBBP mutation and higher CXCL8 mRNA expression (p < 0.001) (Figure ).

CXCR2 expression is associated with increased stromal invasion in the metastatic setting

In cohort 1, expression of stromal CXCL8 was significantly higher in patients with metastatic disease (Figure ). Therefore, a unique cohort of synchronously resected primary colorectal tumours and matched liver metastases (cohort 2) was stained for the main cognate receptor of CXCL8, CXCR2, by IHC. Positive staining was identified amongst the inflammatory infiltrate of some tumours, as shown in representative images (Figure ). The number of CXCR2+ immune cells (mainly neutrophils) at the invasive edge of the primary tumour significantly correlated with CXCR2+ infiltrates at the margin of the matched liver metastases (rho = 0.612, p = 0.003) (Figure ).
In cohort 2, there was a trend towards increased infiltration of CXCR2+ cells in patients with high stromal invasion in primary tumours ( p = 0.088) and a significant increase in matched liver metastases ( p = 0.037) (Figure ). Representative images of stroma‐rich primary and metastatic tumours are shown in supplementary material, Figure .
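The Fisher's exact comparisons of mutation frequencies between the high and low stromal CXCL8 groups amount to a hypergeometric test on a 2×2 table of mutated versus wild-type counts per group. The sketch below is an illustrative pure-Python version with made-up counts, not the study's data; a real analysis would more likely call an existing routine such as scipy.stats.fisher_exact or R's fisher.test.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]],
    e.g. rows = (high CXCL8, low CXCL8), columns = (mutated, wild type).

    Sums the hypergeometric probabilities of every table with the same
    margins that is no more probable than the observed one.
    """
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):
        # Probability of a table with x in the top-left cell.
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # Small tolerance so ties with the observed table are always included.
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-12))
```

With Fisher's classic 4-versus-4 example, `fisher_exact_two_sided(3, 1, 1, 3)` returns 34/70 ≈ 0.486, matching the standard two-sided result.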
This study has strongly implicated CXCL8 mRNA expression within the tumour‐associated stroma as a marker of poor prognosis in CRC. Data from the literature corroborate these findings. A large meta‐analysis investigating both CXCL8 protein and RNA within the tissue and serum of over 1,500 CRC patients identified a strong association between high expression and poor clinical outcome. The current study highlights the importance of the spatial distribution of CXCL8 expression within the TME, as stromal but not tumour cell CXCL8 expression was significantly prognostic. Furthermore, the findings from this study suggest that the prognostic influence of stromal CXCL8 was potentiated in patients with right‐sided colonic tumours. Sidedness is an important clinical characteristic, with these patients more likely to have an elevated systemic inflammatory response and worse outcome. There is evidence that the inflammatory component of the TME is different in right‐sided tumours compared to other disease sites, and the potentiation of the prognostic effect of CXCL8 observed in this study is likely attributable to an immune‐related mechanism. Previous studies have shown that right‐sided tumours have an increased influx of immune cells, particularly CD8+ T cells, and higher expression of PD‐L1 on the surface of tumour cells. In a study of urothelial and renal cell carcinomas, both circulating CXCL8 protein and CXCL8 RNA were elevated in the peripheral blood mononuclear cells of patients who did not respond to PD‐1 checkpoint inhibitors. We hypothesise that one mechanism by which CXCL8 promotes tumours is augmentation of PD‐L1 expression on tumour cells. Therefore, inhibition of CXCL8 in combination with checkpoint inhibitors represents an interesting approach which merits investigation in preclinical studies. In terms of the mechanisms of tumour promotion, there is evidence that secretion of CXCL8 promotes many of the hallmarks of cancer.
In vitro studies of pancreatic cancer have shown that CXCL8 works synergistically with CXCL12 to promote angiogenesis and invasion. Similarly, in prostate cancer cell lines, CXCL8 contributed to increased proliferation and invasion. In the present study, high expression of CXCL8 within the stroma was significantly associated with adverse histological features including a high frequency of tumour buds, stromal invasion, and a moderate Ki67 proliferation index. Stroma‐rich, high‐budding phenotypes confer poor prognosis, and further work is required to elucidate whether inhibition of CXCL8 or its receptor(s) could dampen stromal recruitment and invasion. Previous data from gastric cancer models have shown that CXCL8 specifically derived from CAFs was implicated in driving resistance to chemotherapy. In this study, combined scores of stromal CXCL8 and tumour‐infiltrating macrophages (CD68+ cells) or systemic neutrophil counts significantly stratified patient survival: patients high for both markers had reduced CSS compared to those low for one or both markers. Interestingly, there was no association between combined stromal CXCL8 and CD66b+ cell infiltrates; however, this may be because CD66b is a general granulocyte marker rather than neutrophil specific. We hypothesise that CXCL8 produced by CAFs recruits neutrophils to the TME and fosters an immunosuppressive, pro‐tumour microenvironment. Combined CXCL8/CXCR2 inhibition with a standard‐of‐care chemotherapy represents an exciting approach to investigate in preclinical models of CRC. Patients with high CXCL8 expression in cohort 1 showed a higher frequency of mutations in the CREBBP gene. Mutation of CREBBP has been previously linked to worse prognosis in solid tumour types including head and neck cancer and CRC. CREBBP is an important signalling molecule involved in the regulation of various immune cell populations.
It is important for IL‐10 production and regulatory T‐cell function, but can conversely crosstalk with NF‐κB to promote the transcription of pro‐inflammatory genes. Overactivation of this signalling via mutation is likely linked to an immunomodulatory interaction with CXCL8. Another hallmark of cancer promoted by CXCL8 secretion is metastasis, and here we showed that CXCL8 mRNA expression was enriched in CRC patients with stage IV disease. Previous literature has implicated CXCL8 in promoting metastasis in pancreatic cancer, ovarian cancer, and CRC. In CRC cell lines, CXCL8 drives EMT via a PI3K/AKT/NF‐κB axis. In mouse models of pancreatic ductal carcinoma, CXCR2 inhibition resulted in reduced metastases and improved survival. In this study, CXCR2+ cells were enriched in stroma‐dense primary and secondary tumours in the second cohort of matched colorectal primary tumours and liver metastases. Further work is required to determine whether CXCL8/CXCR2 signalling is responsible for driving this unfavourable stroma‐rich phenotype. Limitations of this study include a lack of mechanistic work; future experiments should include in vitro/in vivo studies to determine the effect of CXCL8 ablation in fibroblasts cocultured with CRC cell lines/organoids, and preliminary pathway inhibition to determine the therapeutic potential of drugs targeting CXCL8/CXCR2 in right‐sided colon cancer models and in the metastatic setting. Multiplex immunofluorescence staining should be employed to explore the influence of the spatial distribution of CXCL8 on survival outcomes in more detail, for example by assessing the co‐localisation of α‐SMA and CXCL8. To conform to REMARK guidelines, future experiments should also include a validation cohort to confirm the findings of this study. The cohorts utilised in this study lacked granularity of treatment data, so it was not possible to correlate CXCL8 expression with response to chemotherapy/chemoradiotherapy type or duration.
A validation cohort with full treatment data should be sought for future work. We were unable to identify an antibody of sufficient specificity and quality to detect CXCL8 at the protein level to confirm translation from mRNA, which represents another limitation of the current study. To conclude, this study has demonstrated a clear role for CXCL8 in the CRC setting. From data acquired thus far, we hypothesise that CXCL8 is involved in promoting a stroma‐rich microenvironment which aids tumour immune evasion and EMT, and that targeting this pathway could be therapeutically beneficial in a subset of patients with right‐sided tumours.
KAFP performed IHC, scored RNAscope and IHC, performed data analysis and wrote the manuscript. JAQ generated experimental data and edited the manuscript. CN performed RNAscope and edited the manuscript. JI performed IHC for the immune cells and edited the manuscript. HCvW assessed tumours for tumour budding and was involved in pathological assessment. DC, SR and the GPOL Group performed mutational analysis. JH constructed TMAs, performed MMR analysis and edited the manuscript. NNM is a consultant pathologist who provided all pathological training and double scoring, ensured quality of pathological assessment and edited the manuscript. CSDR and PGH are consultant colorectal surgeons who established clinical cohorts and built clinical databases. DCM analysed the data and edited the manuscript. JHP and AKR performed TSP and KM assessment for the Glasgow Microenvironment Score. CWS performed data analysis and was involved in writing and editing of the manuscript. JE conceived the study, is grant PI and was involved in pathological assessment, data analysis and editing of the manuscript.
Figure S1. Patient inclusion criteria
Figure S2. The prognostic effect of high stromal CXCL8 expression shows a similar trend in both pMMR and dMMR disease
Figure S3. Intra‐tumour stromal invasion
Table S1. Univariate and multivariate survival analyses
A nationwide survey on clinical neurophysiology education in Italian schools of specialization in neurology

Clinical neurophysiology (CN) according to the International Federation of Clinical Neurophysiology (IFCN) is a "medical specialty concerned with function and dysfunction of the nervous system caused by disorders of the brain, spinal cord, peripheral nerve and muscle, using physiological and imaging techniques to measure nervous system activity" ( http://www.ifcn.info ). Conventional neurophysiological techniques cover two main areas: studies investigating brain activity (electroencephalography, EEG) and studies investigating the peripheral nervous system (nerve conduction studies, NCS, and electromyography, EMG). In the modern era, neurophysiological methods have greatly expanded to include techniques traditionally used in daily clinical practice (EEG, NCS, EMG, evoked potential studies, polysomnography and assessment of sleep disorders, vascular sonography), as well as emerging diagnostic methods, including nerve sonography, vagal nerve stimulation (VNS) for epilepsy, exercise testing for muscle fatigue, intra-operative monitoring (IOM) and neurophysiological assessment of movement disorders. In our experience, during their stay in the neurology unit, each hospitalized patient undergoes at least one neurophysiological test. Clinicians also frequently prescribe neurophysiological investigations for neurological outpatient diagnostic assessment. Even though no comprehensive national data specify the number of outpatient neurophysiological tests conducted per year in Italy, data are available for some regions. For instance, in Lombardy in 2017, the national health system provided more than 2 million neurological visits and tests, corresponding to 35 million euros in revenue. Neurophysiological tests account for more than half of this revenue, approaching 18 million euros (Table ).
Adding to the many neurophysiological tests neurologists need to be familiar with, many reports using neurophysiological techniques as therapeutic tools have appeared in the past few years. Published papers now increasingly recognize the emerging field of non-invasive brain stimulation (including repetitive transcranial magnetic stimulation, rTMS, and transcranial direct current stimulation, tDCS) as providing safe treatments for several neurological and neuropsychiatric conditions, ranging from chronic pain and movement disorders to drug-resistant depression and cognitive enhancement. As a treatment option for movement disorders, invasive brain stimulation has rapidly evolved, with new neurosurgical methods, anatomical targets and neurophysiological markers. Despite the importance of CN in neurological clinical practice, few published data refer to education in this field during postgraduate neurological training, and those available demonstrate wide variability among countries. In 21/32 (66%) of European countries, CN belongs in the neurology residency program. Conversely, in Spain, Portugal, the UK, Finland, Sweden, and Norway, CN is considered a separate medical specialty. In the USA, CN is a subspecialty: neurologists, child neurologists, or psychiatrists can acquire CN certification, usually through a 1-year fellowship. Before 2017, in Italy, postgraduate medical students studied CN as an independent 5-year residency program: during their first 2 years of training, residents usually acquired general neurological practice skills, whereas in the last 3 years they focused on neurophysiological techniques, among other subspecialties. After 2017, CN was integrated into a 4-year neurology residency program. The rapid expansion of the neurological sciences and the increased pressure each subspecialty area places on the program leave Italian neurology residents with an enormous amount of information to learn.
No published study has evaluated the educational level in CN for neurology residents in Italy, yet the European Training Requirements for Neurology of the European Board and Section of Neurology (U.E.M.S.) are quite demanding. Knowing more about CN training in Italian postgraduate specialization schools in neurology would help plan strategies for updating them to meet today's neurologists' increasingly technical needs. Our study aimed to conduct a nationwide web survey to provide a snapshot of the neurophysiological training provided by Italian specialization schools in neurology.
We designed a single-page, Internet-based survey comprising 13 multiple-choice categorical and interval-scale questions. Italian neurology specialization school directors were contacted via e-mail and invited to complete the online form. The survey addressed the following questions: geographical location of the specialization school and structural organization in neurophysiology; time dedicated to each CN subspecialty; and indirect signs of the discipline's importance (number of residents who attended extra-residential courses, gained certification or obtained recognitions; whether a CN test is assessed during the final examination). The full survey is available as supplemental material. Data were segregated by responses and each item was assessed with descriptive statistics. The survey was available online from 1st March to 30th April 2021, for a total of 61 days.
Of the 42 Italian schools of specialization in neurology contacted, 35 (83.3%) answered. Less than half (40%) were from Northern Italy. About two thirds of the centers had a Unit or Section of CN, autonomous and formally separated from the Unit of Neurology (Fig. ). Despite differences, the most studied CN techniques were EEG and EMG; the mean time spent in EEG and EMG training was 6 months for each technique (Fig. ). The specialization schools in neurology devoted less time to multimodal evoked potentials (EPs), ultrasound sonography (US), and intra-operative monitoring (IOM). About 60% of the interviewed centers reported fewer than 3 months spent on training in EPs, a percentage rising to 68.6% for US and to 88.6% for IOM techniques, including deep brain stimulation (DBS) for Parkinson's disease (Fig. ). When asked how technical requirements were objectively evaluated during residency, 77.1% of the centers reported that only four residents or fewer had participated in the past 5 years (2016–2021) in the Examination in Neurophysiology held by the Italian Society of Clinical Neurophysiology ("Certificazione Unica in Neurofisiologia"; Fig. ). Only four centers (11.4%) declared that the final examination during residency requires specialization students to discuss a neurophysiological test; in most schools surveyed, preparation was non-objectively assessed during the training period, without any examination (40.0%), or not assessed at all (11.4%). Accordingly, students' interest in congresses or webinars on neurophysiology, both at the national and international level, was extremely limited, with a mean of 2–4 residents per school participating over the entire period considered (2016–2021). Finally, surprisingly few residents in neurology achieved awards for studies or publications in neurophysiology fields (none in 34.3% and fewer than two in 40% of the cases during 2016–2021).
Our national survey suggests that curricula in Italian specialization schools in neurology lack standardized requirements. Equally disconcerting are the wide variability among training programs (especially concerning duration and the neurophysiological services involved) and the limited training received in multimodal evoked potentials, intra-operative monitoring (IOM) and sonography compared with other neurophysiological techniques. Hence, no standardized CN training is yet available and, when provided, its duration differs among centers, in Italy as well as in other European countries. Our findings are hard to compare with those in other countries because similar studies are still lacking, even though the European Training Requirements for Neurology of the U.E.M.S. explicitly address these specific requirements. Although one has recently been proposed in the USA, no standardized curriculum in clinical neurophysiology during residency in neurology exists so far in European countries. Another critical concern during residency is external rotations, including disciplines not directly related to neurology: the conflict is to provide exposure to neighboring disciplines while allowing sufficient time for the clinical neurophysiology core curriculum. For instance, especially in the first 2 years of training, up to 6 months each year are devoted to rotations in internal medicine units. Another reason why specialization school curricula need updating is the growing need for hyper-specialized neurophysiologists due to recent advances in telemedicine, a requirement that has gained importance during the COVID-19 pandemic outbreak. Finally, another critical concern is that the recent COVID-19 pandemic has rapidly changed our knowledge about neuroinfectious diseases, prompting us to reconsider safety criteria, protocols and recording standards in clinical neurophysiology.
Our data can hardly be compared to those described by other surveys in different countries, owing to differences in the duration of residency courses in neurology and training in neurophysiopathology; nor did other surveys evaluate training in specific technical fields, such as multimodal evoked potentials and intra-operative monitoring. In the USA, Daniello and Weber recently developed a survey for program directors asking about confidence in neurophysiology knowledge, expressed as the percentage of graduates reaching level 4 ACGME (Accreditation Council for Graduate Medical Education) milestones in EEG and EMG. They reported that up to a quarter of residents may graduate without meeting level 4 ACGME milestones (i.e., the highest level of expertise in electromyography), but this American survey left confidence in other neurophysiological techniques (e.g., vascular sonography or multimodal evoked potentials) unassessed. In Europe, Kleineberg and co-workers reported that the learning method in neurology and clinical neurophysiology differs significantly among countries, from a brief theoretical course to a defined minimum number of investigations to be performed; certifications in clinical neurophysiology are often granted by different societies, with different standards, depending on the sub-specialty considered (sonography, EEG, EMG, sleep, neurovascular procedures). The main limitation of our study is its target: each center was represented by the Director of the Neurology Unit, with no questions directly addressed to neurology residents or students; the type of questionnaire administered also neglected to assess their satisfaction and opinions.
Second, the fundamental and interplaying role of the neurophysiology technician has not been investigated in detail: the technician can perform almost all the examinations reported, apart from needle electromyography and invasive or non-invasive brain stimulation, but the final electrophysiological diagnosis and therapeutic approaches remain the physician's responsibility. Based on the results of the present survey, and in line with other countries, we propose a 2-year CN training program following the residency in neurology (or neighboring disciplines). In conclusion, our findings underline the need to define homogeneous educational and training standards for postgraduate clinical neurophysiology in Italy and at the international level.
Clinical Simulation in the Training of Obstetrics and Gynecology Resident from the Perspective of Medical Residency Programs

Historically, traditional methodology based on Cartesian thought has guided the education of health professionals, marked by a fragmented and reductionist approach. The search for technical efficiency and specialized knowledge has led to several changes within educational institutions as well as in their educational propositions. Such changes have equally affected the teaching and learning dynamic, in which the lecturer performs as a content transmitter while the student merely plays the role of a spectator, a system that remained unaltered for the past 100 years, notwithstanding important changes in healthcare. Lately, there has been a shift from the traditional Halstedian training model – "see one, do one, teach one" – to a more contemporary model of competency-based medical education (CBME). Competency-based medical education has become popular all over the world as a new approach to the education and evaluation of novice physicians. The Entrustable Professional Activity (EPA), a concept introduced by Ten Cate and Scheele in 2007, emerges within that context to fill the gap between competence-guided education and clinical praxis. In clinical practice, competences are intertwined in a complex way, so that they are less explicit and measurable. An entrustable professional activity is one that may be entrusted to a person once that person has achieved the necessary competence. The EPAs represent the professional's daily activity, which means they are observable, measurable entities that can be the focus of evaluation.
Therefore, thinking of the teaching-and-learning process from a knowledge-construction perspective – in which resident and professor participate effectively – implies replacing both rote memorization of information and the fragmented transfer of knowledge with a praxis that integrates knowledge through an interdisciplinary posture. In that regard, one values the adoption of methods that encourage students to participate effectively throughout the process. The simulation method is among those known as active methodologies. Medical simulation may be an ancient art; it is, however, a young science that has only recently earned a place in higher education institutions. Simulation uses technology and tools such as simulators, yet simulators alone do not capture the meaning of simulation, despite being part of it. Simulation favors the development of competencies related to the clinical procedures of professional practice, and it goes beyond technical and technological aspects to foster analysis, synthesis, and decision-making. In the United States, Canada, and Europe, several higher education institutions have simulation centers where that methodology is explored and disseminated. In Brazil, one can observe growing adoption of simulation by private and public institutions, as well as an increasing tendency to build simulation centers. However, the high cost of building facilities, acquiring simulators, and hiring skilled personnel seems to hinder that expansion. Notwithstanding those factors, simulation has become popular in the medical field as a complement to traditional training on patients, improving skills while allowing practice of "the real thing" in a safe learning environment.
While pondering the national scenario regarding the use of Clinical Simulation within medical postgraduate courses, a concern emerged about the way that tool is employed throughout Medical Residency Programs, especially those of Obstetrics and Gynecology. The primary assumption was that Clinical Simulation is understood by supervisors of the Obstetrics and Gynecology Medical Residency Programs as an effective pedagogical tool in the residents' learning process, though not widely used. Therefore, the purpose of this study was to analyze the role ascribed by program supervisors to Clinical Simulation applied to the training of residents in Obstetrics and Gynecology in the city of São Paulo. A cross-sectional descriptive, qualitative, and exploratory study was conducted. The research took place in the city of São Paulo by interviewing 10 program supervisors among the 18 who were available at data collection time. The physicians interviewed supervise a total of 358 residents, 72% of the total number of trainees in Gynecology and Obstetrics in the city of São Paulo. As of the seventh interview, data saturation was noticed as information began to recur; however, interviewing continued up to the tenth interview to capture a diversity of institutional features. The first section of data collection used a questionnaire composed of closed questions to characterize the survey participants. The second section consisted of an interview intended to apprehend the role supervisors ascribed to clinical simulation. The semi-structured interviews were transcribed verbatim and the results were analyzed by means of a three-stage content analysis, namely pre-analysis, material exploration, and treatment of results.
Pre-analysis involved a floating reading of all transcribed material obtained from the interviews, which allowed a better comprehension of the context as well as assimilation of the impressions and trends that were found. A session of repeated reading of the material was followed by the identification of Context Units (CU), guided by the core theme The role of simulation in OG Residency. CUs are understood as broader, more contextualized parts of everything said in relation to that theme, considered essential to the analysis and interpretation of the texts to be deciphered. Based on the CUs one could arrive at the Register Units (RU), "the smallest part of content whose occurrence is registered according to the categories found." A categorization process followed the definition of CUs and RUs. Categorization is understood as "a classification operation of constituent elements of a set by differentiation, followed by an analogy-based regrouping according to defined criteria." To arrive at the categories and subcategories, a semantic process was applied by grouping the interpretations of the RUs. Both categories and subcategories emerged from what was said by the interviewees. OG Residency Programs of diverse natures were included, such as those from universities, from nonprofit hospitals owned by federal, municipal, or state public administration, and from philanthropic hospitals. Each participant received an interviewee code ranging from 1 to 10 to assure anonymity. Among the institutions, six are public and four are philanthropic. As for the number of vacant posts accredited at the Medical Residency National Committee (MRNC), the average was 12 vacant posts per year (six at minimum and 20 at maximum). All participant institutions either hold their own medical internship program or provide a training field to another institution's internship. Characterization is presented on .
As for the supervisors' profiles, most of them were male doctors, aged between 40 and 50, with an academic title, as one can see on . As an initial finding, all supervisors considered that Clinical Simulation plays a relevant role in Obstetrics and Gynecology Medical Residency Programs, according to the following transcript: Personally, I consider Realistic Simulation very important...quite inexorable, a matter of time to evolve to that point [E6]. As for the acquisition of abilities, some studies indicate that simulation may be superior to traditional medical training. Professionals who work as OG educators must study simulation and incorporate it into their students' and residents' educational processes. In the United States, the use of simulation is among the criteria set to accredit Medical Residency Programs, which corroborates the importance of extensively using it to improve specialists' performance in technical procedures. Analysis of the interviews identified 58 context units and 78 register units. Among the register units, 9 categories and 11 subcategories emerged, according to . Simulation appears as a complementary tool in the OG residency teaching and learning process, able to assist in residents' professional development. In the last decades, national and international OG societies have encouraged the use of simulation as a complementary tool in the teaching and learning process. In 2007 the American College of Obstetricians and Gynecologists (ACOG) acknowledged simulation as a valuable educational element in undergraduate and graduate studies. Simulation-based methods offer medical students the opportunity to acquire key workplace qualities, such as confidence, knowledge, skills, and the appropriate behavior to offer high-quality care to the patient within a safe learning environment. Among the highlighted options, supervisors emphasized that simulation may homogenize teaching and learning opportunities.
Thereby, the use of simulation seems to be significant, especially nowadays when health services make changes in care delivery while reducing length of hospital stay, which limits bedside learning opportunities. Such circumstances curtail the occasions when residents can be in touch with risky situations and procedures. The possibility of training rare procedures was also emphasized by supervisors. Simulation may protect against unnecessary exposure to a variety of situations, which represents an increasing need due to limited clinical training opportunities. Supervisors also emphasized the possibility of unlimited repetition of procedures. Simulation may also allow deliberate practice, which can be defined as the engagement of students in thoroughly repeating skills, focusing on progressive exercises and informative feedback. Deliberate practice is essential for procedures performed so rarely that few professionals could actually master the necessary abilities without practice and feedback in a non-clinical environment. Such rare procedures have usually been associated with high-risk situations, which lead to medical errors. Deliberate practice plays a main role in preparing professionals for critical events, besides being regarded as the most powerful indicator of specialist performance when compared with experience and academic aptitude. Supervisors also emphasized residents' self-confidence training, as it allows greater confidence in their abilities. Humes et al report that resident doctors felt more confident about their abilities after vaginal hysterectomy training on a uterus model made from a sponge and a PVC pipe. According to the interviewees, the possibility of a safe teaching and learning environment offers residents calmer conditions, as they do not feel pressured to perform perfectly or to avoid any error.
The possibility of ensuring a protected environment in which residents may perform tasks, detect errors, and correct them without producing adverse consequences, and where instructors may find the opportunity to connect better with their apprentices and techniques, is one of the elements that contributes to the effectiveness of simulation. In this context, the possibility of learning from errors minimizes the difficulty of dealing with errors in real practice, in front of patients. It helps improve performance through repetition of the experience until the goal is attained. According to Maslovitz, simulated training thus allows identifying and correcting common clinical errors made during emergencies. Supervisors understand that clinical simulation provides support to a professional practice committed to patient safety. Evidence shows that obstetricians have improved their technical and communication abilities by practicing. In that sense, programs concerned with patient safety must incorporate Obstetrics and Gynecology simulation. Sustained and increasing focus on medical error reduction and patient safety, as well as the need to offer safe, ethical, and student-centered training, leads to a model that incorporates simulation-based education. The role of simulation has also been described as a scenario for teamwork, with emphasis on its application in multidisciplinary training and in permanent education. Training patterns for quick response in obstetric emergencies are useful to improve team performance and bring better results to patients. A systematic review evaluating simulation-based training determined that teamwork became more efficient not just due to advancement of scientific knowledge, but also due to improvement in both communication skills and obstetric emergency management. Simulation-based education proved itself as a scenario for reflection about the OG work process.
It is important to highlight that failure to communicate in teamwork contributes to most obstetric sentinel events. Labor pains and labor itself are critical moments when emergencies occur. The American College of Obstetricians and Gynecologists – ACOG (2014) states that the care provided in emergency cases is enhanced by protocols that standardize interventions and promote on-the-job training. Teams may learn and practice the necessary interventions while improving efficiency and reducing errors. Within this context, simulation may be used to discuss multidisciplinary assistance protocols. As an example, a pilot study using simulation identified ∼20 flaws in the safe application of a new intraoperative radiotherapy procedure before testing it on patients. The flaws involved radiation safety, teamwork, team communication, and problems with both equipment and supplies. Thus, simulation served as a scenario for the creation and discussion of a patient safety protocol whenever innovation is brought to the clinical environment. Due to the increasing number of lawsuits regarding medical practitioners' performance, supervisors stated that preparation for safer professional practice reduces the risk of practitioners being taken to court. According to a report by the Regional Council of Medicine of São Paulo (2006), professional obstetricians and gynecologists rank sixth in number of lawsuits. Most of these lawsuits concern procedures related to labor assistance. Patients are usually awake when unpredictable, life-threatening emergencies occur, which makes teaching more difficult during these moments. Even experienced professionals can be surprised by unexpected situations and rare complications that may happen during labor assistance.
On account of that, medical schools and Medical Residency Programs are encouraged to develop strategies to avoid exposing patients to teaching under such conditions, and simulation stands out as a training opportunity for students and residents. Simulation also plays a role in favoring the decision-making process, thereby enhancing professional attitudes. A study applying simulation to evaluate teamwork training in decision-making demonstrated a reduction from 33 to 21 minutes between the indication of a cesarean section and the surgical incision. Another role Clinical Simulation performs is that of a scenario for evaluative processes in residency, expanding the items to be evaluated within the competencies expected from professionals. During the interviews, supervisors mentioned three evaluation strategies using simulation in residency. A scenario for the enrollment selection process appears as a possibility. Clinical simulation allows a better evaluation of candidates, as it enables better observation of their technical abilities, in addition to their professionalism, communication, and critical thinking. The second strategy mentioned by the supervisors refers to the possibility of evaluating the multiple competencies expected from health professionals during Medical Residency Programs. Simulation was thus mentioned in summative assessment, such as at internship completion and at the conclusion of a stage in the residency program. Although aware of the use of simulation in evaluation, only one supervisor mentioned the OSCE model of evaluation as preparation for the specialist diploma in Obstetrics and Gynecology. The third strategy referred to by the supervisors was the possibility of interactive feedback, which provides an immediate and constructive response to the resident. To students, feedback represents a moment of effective learning.
In research with simulation educators, Rall, Manser and Howard (2000) emphasized that debriefing is the most important part of simulation-based training. One of the interviewees called it "the heart and soul" of simulation-based training. In conclusion, supervisors were unanimous in acknowledging that simulation encourages resident participation in Medical Residency Program activities. They highlight residents' improved performance in their practical activities. Some studies have assessed the efficacy of simulated training on student confidence, examination skills, and communication. In 2015, Smith and collaborators published a systematic review with meta-analysis comparing the teaching of pelvic examination through simulation and through traditional methods. The authors concluded that the simulation method improves student competence in pelvic examination performance as well as communication abilities. Based on the data found in this study, Clinical Simulation in OG Residency: - Complements the teaching and learning process, allows homogeneity of opportunities, and enables training of less-common procedures, deliberate and sustained practice, and residents' self-confidence training. - Provides a safe teaching and learning environment. - Encourages trial-and-error learning, which enables improvement in performance by repeating the experience. - Favors professional practice committed to patient safety. - Enhances teamwork by favoring mutual knowledge and the development of communication abilities; furthermore, it enhances emergency management performance. - Encourages reflection about the work process, which brings the opportunity for discussion of multidisciplinary assistance protocols, prepares for safer professional practice, and reduces the risk of lawsuits. - Favors the decision-making process, especially in emergency situations.
- Expands evaluative processes in residency, allowing the analysis of the multiple competencies expected from health professionals during the Obstetrics and Gynecology Medical Residency Programs. - Favors interactive feedback and the resulting improvement of the resident, as well as of the professor and preceptor. - Encourages participation of residents in the activities of the Obstetrics and Gynecology Medical Residency Programs, resulting in enhanced practical performance. Clinical Simulation is thus acknowledged as a powerful tool to be used in the residents' teaching and learning process. |
Comparative analysis of efficacy and quality of life between totally extraperitoneal sublay and intraperitoneal onlay mesh repair for ventral hernia | 3d42a35e-addb-45ec-8667-680ec51713be | 11762730 | Laparoscopy[mh] | Ventral hernias are a prevalent surgical challenge , characterized by the protrusion of intra-abdominal contents through a defect in the abdominal wall. They can lead to significant morbidity if left untreated, with patients often experiencing pain, discomfort, and impaired quality of life. The increasing incidence of ventral hernias has prompted the development and refinement of various surgical techniques aimed at improving patient outcomes and minimizing complications. Traditionally, open repair methods were widely used; however, the advent of minimally invasive techniques, such as Laparoscopic Intraperitoneal Onlay Mesh (IPOM) repair, has shifted the paradigm in hernia surgery . Minimally invasive approach is considered a safe and effective method for Ventral hernias , . IPOM has gained popularity due to its reduced postoperative pain and shorter recovery time. Nonetheless, concerns remain regarding the potential for intraperitoneal adhesions and complications related to mesh placement within the peritoneal cavity. Recent research shows patients who have undergone incisional hernia repair using IPOM face a heightened risk of bowel obstruction compared to those with a similar surgical history yet without incisional hernia repair (IHR) . Study evaluating the use of meshes in animal models reveal that they can cause adhesions in the intraperitoneal location, even when composite meshes are employed . The Totally Extraperitoneal Sublay Repair (TES) technique has emerged as an alternative approach, offering the theoretical advantage of placing the mesh in the extraperitoneal space, thereby reducing the risk of intraperitoneal complications. 
TES, however, is associated with a longer operative duration, raising questions about its overall efficiency compared to IPOM. This study aims to objectively compare the efficacy, safety, and socio-economic impact of TES and IPOM in the repair of small to medium-sized ventral hernias. Specifically, it focuses on assessing postoperative quality of life and patient satisfaction, two critical metrics that directly influence the long-term success of hernia repair procedures. By analyzing these outcomes, this study seeks to provide evidence-based insights that may guide surgeons in selecting the most appropriate technique for their patients. The importance of this research lies in its potential to inform clinical decision-making, optimize patient outcomes, and contribute to the ongoing evolution of ventral hernia repair methodologies. As the field continues to advance, understanding the comparative benefits and limitations of TES and IPOM is essential for enhancing patient care and improving surgical practice.
Study population

This retrospective cohort study was conducted at Shaoxing Central Hospital, involving patients who underwent ventral hernia repair. Ventral hernias encompass all hernias occurring in the anterior abdominal wall, including epigastric, umbilical, and linea alba hernias. The sample size of 125 patients was determined based on the availability of eligible cases from May 2018 to November 2023. The study included all patients meeting the inclusion criteria to minimize selection bias and ensure representative analysis. Patients were divided into two groups based on the surgical technique employed: Totally Extraperitoneal Sublay Repair (TES) and Laparoscopic Intraperitoneal Onlay Mesh (IPOM) repair. The TES group consisted of 55 patients, while the IPOM group comprised 70 patients. Patients included in the study were those diagnosed with small to medium-sized ventral hernias, primarily epigastric and umbilical hernias, defined as defects ≤ 7 cm. The diameter of the hernia defect was primarily measured using preoperative imaging, with a preference for CT scans, which provided the most accurate and consistent measurements. In some cases, ultrasound was also used to measure smaller defects. Exclusion criteria included patients with large hernias (greater than 7 cm), recurrent hernias, severe obesity, previous extensive abdominal surgeries, or significant comorbidities that could impact surgical outcomes. The HerQLes tool was used for evaluating the patients' postoperative quality of life (QoL). It was applied during outpatient visits at 1 month, 3 months, and 6 months following surgery to track improvements in QoL.

Key technical points

Several key technical considerations were observed during both TES and IPOM procedures to optimize surgical outcomes. For TES, careful dissection of the preperitoneal space was crucial to minimize the risk of injury to surrounding structures, such as the bladder and epigastric vessels.
Ensuring adequate overlap of the mesh was essential to reduce the likelihood of hernia recurrence. The choice between tacks and sutures for mesh fixation in TES was based on surgeon preference and patient-specific factors, with the primary goal being secure mesh placement without excessive tension. In the IPOM procedure, special attention was paid to the positioning of the mesh within the peritoneal cavity to prevent migration or folding, which could lead to recurrence or other complications. The use of composite mesh, with one side designed to minimize adhesions, was critical in reducing the risk of intraperitoneal complications. Transfascial sutures provided additional security for mesh fixation, particularly in larger defects or in patients with higher intra-abdominal pressure. The IPOM procedure has been thoroughly detailed in previous literature, while the surgical process of TES is demonstrated in the materials. An example of an umbilical hernia can be seen in the video and Fig. .

Postoperative care, follow-up, and data collection

All patients received standardized postoperative care, which included pain management, early mobilization, and discharge planning based on recovery progress. Postoperative pain was assessed using the Visual Analog Scale (VAS), and patients were monitored for complications such as seroma, infection, or mesh-related issues. Follow-up assessments were conducted at 3 and 6 months post-surgery through either outpatient visits or telephone interviews. These assessments focused on evaluating the patients' quality of life, satisfaction with the surgical outcome, and any long-term complications, such as recurrence or chronic pain. Baseline characteristics were assessed for all patients included in the study. These characteristics included age, gender, body mass index (BMI), preoperative pain levels, and other relevant factors that could impact postoperative outcomes.
The two groups, TES and IPOM, were compared for these characteristics to ensure their comparability at the time of inclusion. Data collection encompassed operation time, postoperative complications, recovery status, quality of life (assessed using the Hernia-Related Quality of Life Survey, HerQLes), and patient satisfaction questionnaires. Long-term outcomes were primarily concerned with recurrence rates and chronic complications. The HerQLes questionnaire, designed to evaluate the impact of hernia surgery on quality of life, assessed pain, daily activity, psychological state, socioeconomic burden, and overall quality of life. Scores for each item ranged from 1 to 5, with higher scores indicating greater issues and a reduced quality of life. The patient satisfaction questionnaire evaluated pain management, the recovery process, medical services, and overall satisfaction, also using a 1-to-5-point scale, with higher scores reflecting greater satisfaction.

Statistical analysis

To compare outcomes between TES and IPOM groups with similar general characteristics, propensity score matching (PSM) was employed to minimize confounding between the two groups. PSM estimates the likelihood of group assignment based on baseline variables (e.g., age, BMI, gender) and matches participants with similar scores, ensuring comparability between groups. After matching, statistical tests were selected based on data characteristics. Student's t-test was used to compare the means of continuous variables (such as age and BMI) between the two groups when the data were normally distributed. The Mann-Whitney U test was used to compare continuous variables between the two groups when the data were not normally distributed. The chi-square test or Fisher's exact test was used to compare categorical variables (such as gender) between the two groups; Fisher's exact test is typically used when the sample size is small or the expected frequencies are low.
A P-value of less than 0.05 was considered statistically significant. All analyses were performed using SPSS software (version 26.0; IBM Corp., Armonk, NY, USA), ensuring robust and reliable results. This comprehensive methodology accounted for baseline differences and ensured valid comparisons of outcomes between the TES and IPOM groups.
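As a rough illustration of the matching step described above, the sketch below pairs subjects greedily by nearest propensity score with a caliper. It assumes scores have already been estimated (e.g., by logistic regression on age, BMI, and gender); the function name, caliper value, and scores are illustrative and not taken from the study data.

```python
def match_pairs(treated, control, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on propensity scores.

    Each treated score is paired with the closest unused control score
    within `caliper`; treated subjects with no control inside the
    caliper are dropped, mimicking a typical PSM workflow.
    """
    available = list(range(len(control)))
    pairs = []
    for t_idx, t_score in enumerate(treated):
        best, best_dist = None, caliper
        for c_idx in available:
            d = abs(control[c_idx] - t_score)
            if d <= best_dist:
                best, best_dist = c_idx, d
        if best is not None:
            pairs.append((t_idx, best))
            available.remove(best)  # each control used at most once
    return pairs

# Illustrative scores: three TES subjects vs four IPOM subjects
tes = [0.42, 0.55, 0.61]
ipom = [0.40, 0.50, 0.58, 0.90]
print(match_pairs(tes, ipom))  # → [(0, 0), (1, 2)]; the third TES score finds no match
```

In practice the 1:1 matching reported here would leave equal-sized groups (35 per arm in the results below); the caliper controls how dissimilar a matched pair may be.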
As presented in Table , significant differences were observed between the two groups in terms of the longest diameter of defect and BMI (all P < 0.05). However, after 1:1 PSM, the general characteristics became comparable between the TES and IPOM groups. The TES group had a longer operative time but lower costs, less postoperative pain, reduced drainage volume, a lower complication rate, and significantly better quality of life improvements compared to the IPOM group. The operative and perioperative parameters of the TES and IPOM groups are summarized in Table . The mean operative time was significantly longer in the TES group (204.45 ± 37.35 min) compared to the IPOM group (167.68 ± 35.96 min, P < 0.001), as shown in Fig. a. The TES group demonstrated significantly lower medical expenses (1.51 ± 0.25 × 10,000 RMB) than the IPOM group (2.85 ± 0.45 × 10,000 RMB, P < 0.001, Fig. b). The TES group experienced significantly less postoperative pain than the IPOM group on Day 1 (3.22 ± 0.98 vs. 4.35 ± 1.12, P < 0.001) and Day 2 (2.65 ± 0.82 vs. 3.55 ± 0.94, P < 0.001), as shown in Fig. c and d. The TES group had lower drainage volumes (136.54 ± 55.68 ml) compared to the IPOM group (208.58 ± 78.86 ml, P < 0.001, Fig. e). The TES group had a lower complication rate (14.3%, 5/35) than the IPOM group (34.3%, 12/35), but the difference was not statistically significant ( P = 0.051, Fig. f). Among complications, the IPOM group exhibited higher rates of symptomatic seroma (2 vs. 1 case), postoperative ileus (5 vs. 1 case), and chronic pain (5 vs. 2 cases). Chronic pain was defined as pain for more days than not over the last three months post-surgery . Postoperative ileus was characterized by symptoms such as abdominal pain, vomiting, and abdominal distension occurring within two weeks after surgery. Diagnosis was confirmed through abdominal imaging, with mechanical causes like intestinal torsion and other factors such as hypokalemia and tumor metastasis being excluded. 
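The borderline complication-rate comparison (5/35 vs. 12/35, P = 0.051) can be reproduced with a Pearson chi-square on the 2×2 table. The sketch below uses the identity that a 1-df chi-square variate is a squared standard normal, so the two-sided p-value is erfc(sqrt(χ²/2)); note that statistical packages may additionally report a continuity-corrected value:

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for the 2x2 table
    [[a, b], [c, d]]; returns (statistic, two-sided p-value), df = 1."""
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    chi2 = sum((obs - rows[i] * cols[j] / n) ** 2 / (rows[i] * cols[j] / n)
               for (i, j), obs in {(0, 0): a, (0, 1): b,
                                   (1, 0): c, (1, 1): d}.items())
    # df = 1: chi2 is a squared standard normal, so p = erfc(sqrt(chi2 / 2))
    return chi2, math.erfc(math.sqrt(chi2 / 2))

# TES: 5 of 35 with complications; IPOM: 12 of 35
chi2, p = chi_square_2x2(5, 30, 12, 23)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")  # chi2 = 3.807, p = 0.051
```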
The quality of life (QOL) outcomes, assessed using the HerQLes questionnaire, are presented in Table . Key differences between the TES and IPOM groups over time are highlighted. The TES group reported significantly lower pain scores at 30 days (3.35 ± 0.49 vs. 3.75 ± 0.63, P = 0.004) and 60 days (3.04 ± 0.65 vs. 3.33 ± 0.51, P = 0.042), with no significant difference at 180 days ( P = 0.755, Fig. a). The TES group showed less movement limitation at 30 days (2.75 ± 0.41 vs. 3.05 ± 0.59, P = 0.016), with differences diminishing at 60 days ( P = 0.665) and 180 days ( P = 0.168, Fig. b). Differences in psychological state were significant at 30 days (2.54 ± 0.41 vs. 2.75 ± 0.46, P = 0.048) but not at 60 ( P = 0.108) or 180 days ( P = 0.077, Fig. c). The TES group demonstrated better daily activity scores at 30 days (3.10 ± 0.55 vs. 3.40 ± 0.65, P = 0.041) and 60 days (2.86 ± 0.46 vs. 3.09 ± 0.48, P = 0.045), with no significant difference at 180 days ( P = 0.233, Fig. d). For overall quality of life, the TES group performed better at 180 days (2.48 ± 0.44 vs. 2.71 ± 0.45, P = 0.034), despite no significant differences at earlier timepoints (30 days P = 0.211, 60 days P = 0.512, Fig. e). Follow-ups were conducted monthly via outpatient visits or telephone for the first six months, and then every three to six months thereafter. No recurrences were observed in any patients by the final follow-up. Satisfaction scores also indicated that patients in the TES group experienced better pain management and reported higher overall satisfaction postoperatively compared to the IPOM group, as shown in Table .
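With 35 patients per matched group, these comparisons can be sanity-checked from the reported means and SDs alone. The sketch below computes a Welch t-statistic from summary statistics, illustrated on the 30-day pain scores; it stops at the t-value because an exact t-distribution p-value requires the incomplete beta function, but |t| ≈ 2.96 with ~64 degrees of freedom is consistent with the reported P = 0.004:

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch t-statistic and Welch-Satterthwaite degrees of freedom
    computed from group summary statistics."""
    v1, v2 = sd1 ** 2 / n1, sd2 ** 2 / n2
    t = (mean1 - mean2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# 30-day HerQLes pain scores: TES 3.35 +/- 0.49 vs. IPOM 3.75 +/- 0.63
t, df = welch_t(3.35, 0.49, 35, 3.75, 0.63, 35)
print(f"t = {t:.2f}, df = {df:.1f}")  # t = -2.96, df = 64.1
```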
The management of ventral hernias remains a critical focus in abdominal surgery due to the high incidence of associated postoperative complications and the imperative for repair strategies that ensure long-term recurrence prevention. A study revealed that the prevalence of abdominal wall hernias in the general Russian population was 20.9%. The rising prevalence of ventral hernias, alongside advancements in surgical techniques, necessitates a comprehensive understanding of the available surgical approaches. Ventral hernia repair has evolved significantly over the past few decades. Traditional open repair methods, while effective, are associated with considerable morbidity, including increased pain, longer recovery periods, and higher recurrence rates. The advent of minimally invasive techniques such as IPOM has shifted the paradigm, offering reduced postoperative pain, shorter hospital stays, and quicker return to normal activities. However, IPOM is not without its drawbacks, particularly concerning the risk of intraperitoneal adhesions, mesh-related complications, and chronic pain. A 2022 systematic review concluded that the incidence of bowel obstruction was higher with IPOM than with the sublay technique. Spiral tacks used for intraperitoneal mesh fixation are a leading cause of adhesions and bowel lesions. TES, an alternative to IPOM, places the mesh in the extraperitoneal space, theoretically reducing the risk of intraperitoneal complications while potentially enhancing patient outcomes. The primary objective of this study was to evaluate and compare the efficacy, safety, postoperative quality of life, patient satisfaction, and socio-economic impact of TES and IPOM in the repair of small-to-medium ventral hernias. By analyzing these outcomes, the study sought to provide evidence-based insights that could guide surgeons in selecting the most appropriate technique for their patients, ultimately improving clinical decision-making.
Ventral hernias, if left untreated or improperly managed, can lead to severe complications, including bowel obstruction, strangulation, and chronic pain, significantly impairing a patient’s quality of life. As surgical techniques continue to evolve, it is crucial to determine which methods offer the best balance between safety, efficacy, and patient-centered outcomes. This research is particularly important because it addresses the need for a clearer understanding of how these two techniques perform in real-world clinical settings. The results of this study provide valuable data on the postoperative experiences of patients, including pain management, recovery time, and overall satisfaction with the surgical outcome. These factors are critical not only for immediate postoperative recovery but also for long-term patient well-being and the prevention of hernia recurrence. Moreover, the socio-economic impact of these procedures is a significant consideration. With healthcare costs rising globally, the ability to reduce hospitalization costs, minimize postoperative complications, and enhance recovery times is increasingly important. By demonstrating that TES may offer advantages in terms of cost-effectiveness and patient satisfaction, this study contributes to the ongoing discussion about how to optimize resource allocation in healthcare settings while maintaining high standards of patient care. The results of this study indicate that while both TES and IPOM are effective in repairing small-to-medium ventral hernias, they offer distinct advantages and drawbacks. TES, despite its longer operative duration, was associated with significantly reduced postoperative pain, lower hospitalization costs, and diminished postoperative drainage. Our results are generally consistent with those previously reported by Li et al. These findings suggest that TES may be a more cost-effective option, particularly for patients for whom postoperative pain management and cost are major concerns.
In contrast, IPOM showed a slightly higher complication rate in this study, particularly concerning postoperative ileus and chronic pain. These differences, while not statistically significant in this study, highlight the need for careful patient selection and technique refinement when opting for IPOM. The quality of life assessments further support the advantages of TES, with patients in the TES group reporting higher satisfaction scores and better overall quality of life at both 3 and 6 months postoperatively. The results of our study are similar to other reports in the literature. This suggests that TES may provide more durable improvements in patient outcomes, potentially leading to fewer long-term complications and a lower likelihood of recurrence. Furthermore, our analysis identified several factors that significantly influence postoperative QoL in patients undergoing ventral hernia repair. Patient-related factors such as age, body mass index (BMI), and preoperative pain levels were found to be significantly correlated with QoL outcomes. For instance, older patients or those with higher preoperative pain levels tended to report lower QoL scores after surgery. Surgical factors, including operative time and intraoperative complications, also had a substantial impact. Longer operative times and the occurrence of intraoperative complications were associated with poorer postoperative QoL. Additionally, complications during the postoperative period, including seromas and chronic pain, were found to negatively affect QoL. These findings underscore the importance of careful patient selection and surgical technique optimization in enhancing postoperative outcomes. Our future work will explore these factors in greater detail to better understand their contribution to QoL changes. The findings of this study have several important clinical implications.
First, it underscores that the choice of surgical technique should be tailored to the patient’s specific circumstances, including their health status, hernia characteristics, and personal preferences. Surgeons should consider the potential benefits of TES in terms of postoperative pain reduction and cost-effectiveness, particularly for patients who are at higher risk for chronic pain or who may benefit from a more cost-efficient approach. Second, the study highlights the importance of optimizing ventral hernia repair techniques. As minimally invasive methods continue to evolve, it is crucial to refine these techniques to minimize complications and further enhance patient outcomes. Future research should prioritize the long-term follow-up of patients who undergo TES and IPOM, with particular attention to recurrence rates, chronic pain management, and overall quality of life. Finally, this study suggests that there is still room for improvement in both TES and IPOM. For TES, reducing operative time without compromising the benefits observed in this study could enhance its appeal as a preferred technique. For IPOM, addressing the risks associated with intraperitoneal mesh placement, such as adhesions and chronic pain, will be critical for its continued use in ventral hernia repair. Our findings align with previous studies that have demonstrated the efficacy of TES in reducing postoperative complications and improving recovery time, albeit with longer operative durations compared to IPOM. Similarly, while IPOM has been widely adopted due to its safety, its association with long-term mesh-related complications cannot be ignored. This study acknowledges several limitations that warrant consideration. First of all, the limited sample size in this study constrains the generalizability and statistical power of our findings. Future research with larger cohorts is necessary to validate these results across diverse populations.
Secondly, as this was a retrospective study, there is a potential for selection bias. However, PSM analysis was conducted to mitigate discrepancies in general characteristics between the two groups. This adjustment allows for a more accurate comparison of the efficacy and quality of life outcomes between the two surgical approaches, minimizing the potential for confounding factors to influence the results. While this mitigates confounding to some extent, randomized controlled trials remain the gold standard for eliminating such biases. Furthermore, the follow-up period was inadequate for a comprehensive evaluation of long-term outcomes, including recurrence rates. This limitation underscores the need for extended follow-up in future studies to capture a fuller picture of outcomes, particularly concerning different hernia types and sizes. Addressing these limitations in subsequent research will enhance the reliability of findings and provide more robust clinical guidance. In conclusion, both TES and IPOM are effective options for small-to-medium ventral hernia repair, each with unique advantages. TES, characterized by lower postoperative pain, reduced hospitalization costs, and greater patient satisfaction, may present a more cost-effective and patient-centered option. The selection between TES and IPOM should be guided by a comprehensive assessment of the patient’s specific needs, the surgeon’s expertise, and the hernia’s particular characteristics. This study contributes valuable insights to clinical practice, emphasizing the need for individualized treatment plans. The findings support the growing preference for TES in suitable cases. Future research should focus on long-term outcomes, cost-effectiveness, and quality of life assessments in larger, more diverse patient populations.
Additionally, studies exploring hybrid approaches combining both techniques could further optimize patient outcomes, offering more personalized care pathways for hernia repair.
|
A comprehensive scan of psychological disciplines through self-identification on Google Scholar: Relative endorsement, topical coverage, and publication patterns | d3660bcd-9e4a-43a8-8aff-4a8e69d14fa5 | 10760704 | Physiology[mh] | Soon after the emergence of psychology as an independent field of research in the second half of the 19 th century, different disciplines have emerged within it . Scientists have divided themselves further into more circumscribed disciplines, such as developmental and social psychology. These divisions have often been criticized because they reduce cross-talk between researchers and lead to isolated research efforts that would benefit from a multidisciplinary perspective . For example, one risk is that members of segregated disciplines are no longer informed about the topics of other disciplines . Also, without much direct contact, it becomes difficult to compare disciplines, let alone integrate their respective insights. The current research set out to conduct a systematic scan of Google Scholar (GS) profiles. Although this methodology is less suited to inform conclusions about disciplinary formation or the social organization of scientists, GS profiles do offer unique insights into the interface between individual researchers and their disciplinary identification. By investigating correlates of these disciplinary identifications, tentative conclusions might be drawn about the commonalities and uniquenesses of major psychological disciplines. We did so by investigating three basic research questions, comparing psychological disciplines in 1) their relative endorsement across time and world regions, 2) their topical coverage, and 3) publication patterns. Classification of psychological disciplines Various systems exist to classify psychological disciplines, using, for example, empirical or rational arguments . Some approaches have focused on fundamental theoretical paradigms and identified “schools” within psychology. 
For example, Robins et al. identified four schools: Psychoanalysis, behavioral psychology, cognitive psychology, and neuroscience. In the Netherlands, Duijker introduced five basic disciplines: experimental psychology, methods and statistics, developmental psychology, personality psychology, and social psychology. In Germany, the German Psychological Society has declared several fundamental disciplines as part of its curriculum framework, overlapping with Duijker’s but also including biological psychology and more applied disciplines such as psychological assessment, clinical psychology, educational psychology, and industrial/organizational psychology . Finally, the widely used Web of Science database currently distinguishes 10 psychological disciplines, which are comparable with the German classification, with the exception that psychoanalysis is included in Web of Science, whereas personality psychology is not. For the present study, we adopted the Web of Science classification because it is often used in bibliometric research (e.g., ). However, we decided to modify it in a number of ways. First, we added personality psychology because it is considered a core psychological domain in many countries (e.g., in the Netherlands and Germany). Second, we did not adopt the Web of Science domain “applied psychology” as a discipline because a) we deemed it too heterogeneous to be useful (i.e., there are many ways to “apply” psychology, for example, in forensic settings, work and organizations settings, etc.) and b) there were too few psychologists (< 100) who endorsed this label in GS. Third, we added cognitive neuroscience as a separate field because of a) its increasing prominence and b) its inclusion in Web of Science as a separate interdisciplinary category of “neurosciences”. 
Fourth, we did not include the category “multidisciplinary psychology” because a) we intended to create this category empirically (see below) and b) this is not a common term psychologists self-identify with. In total, we ended up with 10 disciplines: psychoanalysis, clinical psychology, cognitive neuroscience, developmental psychology, educational psychology, experimental psychology, biological psychology, mathematical psychology, social psychology, and personality psychology. We used this categorization of 10 psychological disciplines to address three research questions, which we outline in the following sections. Research questions Relative endorsement and international representation of disciplines Our first research question pertains to the relative endorsement of psychological disciplines across time and countries. The relative endorsement of psychological disciplines is defined as the relative percentage of psychological scientists that identify with a particular psychological discipline (as expressed in GS profiles). Robins, Gosling, and Craik studied changes in four schools in psychology: psychoanalytic, behavioral, cognitive, and neuroscience. They observed that the predominance of cognitive perspectives increased sharply during the 1970s, as measured by increasing use of corresponding keywords in articles and dissertations as well as a relative increase in the number of citations. Their analysis also showed a decline in prominence of psychoanalysis, and an unexpected lack of increase in prominence of the (then still rather nascent) neuroscience school. Note that the “schools” as studied by Robins et al. represent paradigms that can be theoretically applied to many (if not all) disciplines. For example, the cognitive paradigm can be applied to developmental psychology, social psychology, education psychology, and so forth. 
In contrast, disciplines additionally have organizational features, such as their own journals, conferences, and scientific associations, and oftentimes they are also reflected in institutional structures, such as the formation of separate units (e.g., departments). The current study investigates disciplines in this latter regard, although we acknowledge that it can sometimes be difficult to separate disciplines from paradigms (e.g., in the case of “experimental psychology”). Another focus of our analysis is on the relative prominence of Anglo-Saxon countries like the US in psychological research. There have been frequent criticisms of the overreliance on Western samples in psychological research, but it is equally important that there is diversity in terms of authors’ cultural and ethnic background–not just within any diverse country (e.g., the US, Brazil, South Africa, etc.), but also between countries. Comparing the share of US contributions over time, a relative decrease has been reported both for the period from 1975 to 1994 and from 1996 to 2010. To the best of our knowledge, however, no systematic analysis has focused on differences between world regions in the relative identification with psychological disciplines. It has been argued that cultural and economic background is relevant for the relative endorsement of certain scientific paradigms. For example, it has been argued that the drive model underlying psychoanalysis is a typically individualistic model that does not sufficiently take relational considerations into account. Whether there are indeed systematic differences between countries in their researchers’ self-identification with psychological disciplines is still an open question, however. Topical coverage of psychological disciplines Our second research question pertains to differences between psychological disciplines in the kinds of topics that are specific to each discipline. We did not have firm predictions.
Some domain-specific topics seemed obvious, for example, that mathematical psychology would be focused most on statistical techniques (e.g., multilevel modeling, structural equation modeling, etc.). Other than that, linking topics to disciplines is a complex endeavor and often depends more on established traditions than on logical classifications. For example, “attention” or “perception” are common topics for experimental disciplines, such as experimental psychology and neuroscience, but they could theoretically also be investigated in other disciplines (e.g., development of perception in children; attention bias in clinical psychology). Because of this, we did not derive firm predictions regarding the topics that would emerge as discipline-specific. For the same reason, we did not have good reasons to expect certain topics to be more multidisciplinary than others. The only exception was the discipline of personality psychology, which has been identified by Yang and Chiu as a so-called “hub science”–a discipline that produces knowledge that is widely used by other disciplines. (Yang and Chiu actually based their conclusion on the hub position of the Journal of Personality and Social Psychology , which is, by its name, a mixed journal. However, their subsequent interpretation relied most directly on personality psychology as a unifying discipline that studies the “whole person”.) It might thus be expected that topics that are often studied in personality psychology might have a greater multidisciplinary appeal. Of note, however, Yang and Chiu used citation patterns related to APA flagship journals of different psychological disciplines, whereas we used citation patterns linked to individual researchers self-identifying with such disciplines, so it is unclear whether their results would generalize to the current study.
Finally, we investigated whether the relative emphasis by researchers on certain research topics would resemble established meta-distinctions between psychological traditions. Cronbach has already remarked that psychology can be divided by approaches that use a correlational methodology versus approaches that use an experimental methodology. In correlational psychology, differences in people’s everyday behavior are investigated, oftentimes using survey methodology or observations. In experimental psychology, general processes are studied in controlled laboratory settings, oftentimes using reaction times and physiological signals as indicators (for an overview of distinctive features, see ). This broader distinction was recently validated in an empirical analysis of words appearing in abstracts of Dutch psychological articles: Clear “continents” (i.e., spatially clustered groups) of correlational versus experimental terms emerged . We thus expected that we would also find evidence for such distinctions in an analysis of research topics as endorsed in international GS profiles. Publication patterns of psychological disciplines Our third research question pertains to differences between disciplines in productivity and citation impact. Comparing the impact of different psychological disciplines can be useful if institutions must decide which psychological discipline to invest in or to devise strategies to achieve the most impact. Also, when comparing researchers from different psychological disciplines, it is important to know the average benchmark of these researchers’ disciplines to compare their relative performances. There are various sources of impact differences between disciplines. In the following, we discuss two of them: the centrality of the discipline and the robustness of its findings, though others might also apply. 
Regarding the former, when a discipline is a hub science, it receives citations from many other disciplines and accumulates more impact than disciplines that are more at the periphery of the discipline citation network. As stated above, Yang and Chiu identified personality psychology as a hub science, which might translate to more impact for scientists who identify with that discipline (see also ). Regarding the latter, disciplines might differ in the replicability of their findings. In a widely cited analysis, for example, cognitive psychology studies were found, averaged across several effects, to replicate more frequently than social psychology studies. In addition, findings from personality psychology have recently been identified as especially likely to replicate. Methodological issues in comparing disciplines There are multiple ways to compare the impact of disciplines, each with their advantages and disadvantages: Using journal impact metrics, attending to institutional or organizational structures, or crawling publicly available author profiles. In the following, we will compare these different approaches. One way is to look at the average (or median) impact factors of journals that are associated with a discipline. For example, the Journal Citation Report [JCR; ] identifies 9 separate psychological categories and provides information about the average impact factor within these categories. For psychology, journal impact in the neurosciences is thus determined to be highest, and impact in psychoanalysis the lowest. Besides being frequently criticized as a problematic indicator of scholarly quality (e.g., ), a disadvantage of relying on the impact factor is that journal classification systems are typically domain-general, so their disciplinary divisions can appear somewhat haphazard.
For example, JCR derives most categories from subject matter (e.g., social psychology, developmental psychology) but also includes some categories based on methodology (experimental psychology) or theoretical approach (psychoanalysis). Furthermore, a representative scholar of certain disciplines might publish in cross-domain outlets (e.g., Psychological Review or PNAS ) or in outlets of other disciplines (e.g., a personality psychologist publishing in Educational Psychology ). This hampers a comparison between disciplines if journal classifications are used. Impact across disciplines can also be compared by establishing individual indices from representative groups of researchers, such as within a certain university or scientific associations. For example, representatives of disciplines might be identified by means of their faculty affiliation (e.g., Department of Clinical Psychology). A potential problem, however, is that many universities do not have organizational structures that mirror the disciplinary organization of psychology. For example, many universities do not have a department of educational or personality psychology, although these are clearly recognizable sub-disciplines. Representatives of certain disciplines might also be drawn from lists of editorial boards of prominent disciplinary journals, or from the boards of learned associations. This has the disadvantage that only relatively prominent researchers are sampled, which would not allow for a fair comparison across different career stages. A third approach, which was adopted in the current paper, is to use researchers’ (self-)identified disciplines on a publicly available bibliometric search engine, such as Google Scholar (GS). 
This bibliometric resource was launched in 2004 and is free, popular, and widely used by psychological scientists today–and is therefore often preferred because it also captures non-journal publication outlets that are relevant for some disciplines but not others, such as conference proceedings . Since 2011, it is possible for researchers to create a profile that lists their contribution, and also list “areas of interest”, which are typically used to specify the researcher’s sub-discipline and/or topics of interest. Using self-identified sub-disciplines in GS has a number of key advantages. For example, it allows for researchers to describe up to five research topics in their own words, thus minimizing artificial or otherwise biased categorizations. The fact that multiple topics are possible also allows researchers to identify with more than one discipline. The GS scholar database also assigns a unique ID to each researcher, thus allowing longitudinal analysis of productivity and citation patterns. Finally, scholars of all career stages and backgrounds can create profiles on the platform, which increases the diversity of the overall pool of researchers that might endorse one of the targeted psychological disciplines. That said, the use of GS also has a number of important limitations, such as the fact that not all researchers have GS accounts, not all GS accounts specify one or more research topics, and not all specified topics can easily be assigned to one of the selected psychological sub-disciplines. We aimed to partially address these limitations through some exploratory analyses and will revisit them in the Discussion section. The current study The current paper used researchers’ self-endorsed identifications with psychological disciplines as a starting point of a comprehensive scan of public profiles of psychological scientists. 
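Because GS profiles expose per-publication citation counts, the cumulative impact metrics the platform reports can be recomputed directly from profile data. As one concrete example (an illustration of the metric, not a computation taken from this study), the h-index displayed on a GS profile is:

```python
def h_index(citations):
    """Hirsch's h-index: the largest h such that the author has at least
    h publications with h or more citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4
print(h_index([0, 0, 0]))         # -> 0
```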
We used GS to identify researchers by means of labels related to 10 major psychological disciplines (psychoanalysis; clinical psychology; (cognitive) neuroscience; developmental psychology; educational psychology; experimental psychology; biological psychology/psychophysiology; mathematical psychology/psychometrics; social psychology; personality psychology). The researchers’ profile and citation data were then used to address three research questions. For Research Question 1 , we investigated distributions of self-endorsed disciplines so that we could empirically establish the relative frequency of researcher profiles with a multidisciplinary background as well as changes in prominence of the different disciplines over time. For Research Question 2 , we looked at self-endorsed labels of all profiles to identify topics that are characteristic for certain (groups of) disciplines, as well as topics that are highly cross-disciplinary. For Research Question 3 , we compared the impact of psychological disciplines, both in terms of average productivity per year as well as cumulative citation impact. To address these questions, we used the GS profiles to create average findings for the 10 psychological disciplines across hundreds of scholars each. Our approach has several features that set it apart from other literature. First, we took a broad approach focusing on all psychological disciplines but also went into depth regarding one discipline that is often left out of analyses: personality psychology. Furthermore, we used Google Scholar to compare disciplines, which has not been done before but has several advantages. For example, it allowed us to flesh out the topics that each discipline tackles and also to identify topics that are covered by multiple disciplines. Our findings can thus give rise to more constructive suggestions for topics that have the most potential for interdisciplinary collaboration. 
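The profile-scanning step described above — mapping self-reported “areas of interest” strings to discipline labels, with profiles matching more than one discipline treated as multidisciplinary — can be sketched as simple keyword matching. The keyword lists below are illustrative stand-ins, not the study’s actual GS search terms:

```python
# Illustrative keyword lists; the study's actual search labels may differ.
DISCIPLINES = {
    "psychoanalysis": ["psychoanalysis"],
    "clinical": ["clinical psychology"],
    "cognitive neuroscience": ["cognitive neuroscience"],
    "developmental": ["developmental psychology"],
    "educational": ["educational psychology"],
    "experimental": ["experimental psychology"],
    "biological": ["biological psychology", "psychophysiology"],
    "mathematical": ["mathematical psychology", "psychometrics"],
    "social": ["social psychology"],
    "personality": ["personality psychology"],
}

def classify_profile(interests):
    """Return the set of disciplines whose labels appear among a profile's
    self-reported areas of interest (case-insensitive substring match)."""
    terms = [t.lower() for t in interests]
    return {disc for disc, labels in DISCIPLINES.items()
            if any(lab in term for lab in labels for term in terms)}

def is_multidisciplinary(interests):
    """A profile endorsing two or more disciplines counts as multidisciplinary."""
    return len(classify_profile(interests)) > 1

print(classify_profile(["Social Psychology", "emotion regulation"]))
# -> {'social'}
```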
Furthermore, our method allows for the identification of linkages with the individual scholar as the unit of analysis, so novel links can emerge (e.g., if topics covary in scholarly interest profiles but are typically investigated in separate papers). Finally, our analysis covers a broad timespan that has featured many important developments, for example, the increased globalization of academic scholarship. Our method also has important limitations, however, which we cover in the Discussion section.

Various systems exist to classify psychological disciplines, using, for example, empirical or rational arguments. Some approaches have focused on fundamental theoretical paradigms and identified “schools” within psychology. For example, Robins et al. identified four schools: psychoanalysis, behavioral psychology, cognitive psychology, and neuroscience. In the Netherlands, Duijker introduced five basic disciplines: experimental psychology, methods and statistics, developmental psychology, personality psychology, and social psychology. In Germany, the German Psychological Society has declared several fundamental disciplines as part of its curriculum framework, overlapping with Duijker’s but also including biological psychology and more applied disciplines such as psychological assessment, clinical psychology, educational psychology, and industrial/organizational psychology. Finally, the widely used Web of Science database currently distinguishes 10 psychological disciplines, comparable with the German classification except that psychoanalysis is included in Web of Science, whereas personality psychology is not. For the present study, we adopted the Web of Science classification because it is often used in bibliometric research. However, we modified it in a number of ways. First, we added personality psychology because it is considered a core psychological domain in many countries (e.g., in the Netherlands and Germany).
Second, we did not adopt the Web of Science domain “applied psychology” as a discipline because a) we deemed it too heterogeneous to be useful (i.e., there are many ways to “apply” psychology, for example, in forensic settings, work and organizational settings, etc.) and b) too few psychologists (< 100) endorsed this label in GS. Third, we added cognitive neuroscience as a separate field because of a) its increasing prominence and b) its inclusion in Web of Science as a separate interdisciplinary category of “neurosciences”. Fourth, we did not include the category “multidisciplinary psychology” because a) we intended to create this category empirically (see below) and b) it is not a common term psychologists self-identify with. In total, we ended up with 10 disciplines: psychoanalysis, clinical psychology, cognitive neuroscience, developmental psychology, educational psychology, experimental psychology, biological psychology, mathematical psychology, social psychology, and personality psychology. We used this categorization of 10 psychological disciplines to address three research questions, which we outline in the following sections.

Relative endorsement and international representation of disciplines

Our first research question pertains to the relative endorsement of psychological disciplines across time and countries. The relative endorsement of a psychological discipline is defined as the relative percentage of psychological scientists who identify with that discipline (as expressed in GS profiles). Robins, Gosling, and Craik studied changes in four schools of psychology: psychoanalytic, behavioral, cognitive, and neuroscience. They observed that the predominance of cognitive perspectives increased sharply during the 1970s, as measured by the increasing use of corresponding keywords in articles and dissertations as well as a relative increase in the number of citations.
Their analysis also showed a decline in the prominence of psychoanalysis, and an unexpected lack of increase in the prominence of the (then still rather nascent) neuroscience school. Note that the “schools” studied by Robins et al. represent paradigms that can theoretically be applied to many (if not all) disciplines. For example, the cognitive paradigm can be applied to developmental psychology, social psychology, educational psychology, and so forth. In contrast, disciplines additionally have organizational features, such as their own journals, conferences, and scientific associations, and oftentimes they are also reflected in institutional structures, such as the formation of separate units (e.g., departments). The current study investigates disciplines in this latter regard, although we acknowledge that it can sometimes be difficult to separate disciplines from paradigms (e.g., in the case of “experimental psychology”). Another focus of our analysis is the relative prominence of Anglo-Saxon countries like the US in psychological research. There have been frequent criticisms of the overreliance on Western samples in psychological research, but it is equally important that there is diversity in terms of authors’ cultural and ethnic background: not just within any diverse country (e.g., the US, Brazil, South Africa, etc.), but also between countries. A relative decrease in the share of US contributions over time has been reported both for the period from 1975 to 1994 and for the period from 1996 to 2010. To the best of our knowledge, however, no systematic analysis has focused on differences between world regions in the relative identification with psychological disciplines. It has been argued that cultural and economic background is relevant for the relative endorsement of certain scientific paradigms.
For example, it has been argued that the drive model underlying psychoanalysis is a typically individualistic model that does not sufficiently take relational considerations into account. Whether there are indeed systematic differences between countries in their researchers’ self-identification with psychological disciplines is still an open question, however.

Topical coverage of psychological disciplines

Our second research question pertains to differences between psychological disciplines in the kinds of topics that are specific to each discipline. We did not have firm predictions. Some domain-specific topics seemed obvious; for example, mathematical psychology should be focused most on statistical techniques (e.g., multilevel modeling, structural equation modeling, etc.). Beyond that, linking topics to disciplines is a complex endeavor that often depends more on established traditions than on logical classifications. For example, “attention” and “perception” are common topics for experimental disciplines, such as experimental psychology and neuroscience, but they could theoretically also be investigated in other disciplines (e.g., the development of perception in children; attention bias in clinical psychology). Because of this, we did not derive firm predictions regarding the topics that would emerge as discipline-specific. For the same reason, we did not have good reasons to expect certain topics to be more multidisciplinary than others. The only exception was the discipline of personality psychology, which Yang and Chiu identified as a so-called “hub science”: a discipline that produces knowledge that is widely used by other disciplines. (Yang and Chiu actually based their conclusion on the hub position of the Journal of Personality and Social Psychology, which is, by its name, a mixed journal. However, their subsequent interpretation relied most directly on personality psychology as a unifying discipline that studies the “whole person”.)
Topics that are often studied in personality psychology might thus be expected to have a greater multidisciplinary appeal. Of note, however, Yang and Chiu used citation patterns related to the APA flagship journals of different psychological disciplines, whereas we used citation patterns linked to individual researchers self-identifying with such disciplines, so it is unclear whether their results would generalize to the current study. Finally, we investigated whether the relative emphasis by researchers on certain research topics would resemble established meta-distinctions between psychological traditions. Cronbach already remarked that psychology can be divided into approaches that use a correlational methodology and approaches that use an experimental methodology. In correlational psychology, differences in people’s everyday behavior are investigated, oftentimes using survey methodology or observations. In experimental psychology, general processes are studied in controlled laboratory settings, oftentimes using reaction times and physiological signals as indicators (for an overview of distinctive features, see ). This broader distinction was recently validated in an empirical analysis of words appearing in the abstracts of Dutch psychological articles: clear “continents” (i.e., spatially clustered groups) of correlational versus experimental terms emerged. We thus expected to also find evidence for such distinctions in an analysis of research topics as endorsed in international GS profiles.

Publication patterns of psychological disciplines

Our third research question pertains to differences between disciplines in productivity and citation impact. Comparing the impact of different psychological disciplines can be useful when institutions must decide which psychological discipline to invest in or devise strategies to achieve the most impact.
Also, when comparing researchers from different psychological disciplines, it is important to know the average benchmark of these researchers’ disciplines in order to compare their relative performance. There are various sources of impact differences between disciplines. In the following, we discuss two of them, the centrality of the discipline and the robustness of its findings, though others might also apply. Regarding the former, when a discipline is a hub science, it receives citations from many other disciplines and accumulates more impact than disciplines that sit at the periphery of the discipline citation network. As stated above, Yang and Chiu identified personality psychology as a hub science, which might translate into more impact for scientists who identify with that discipline. Regarding the latter, disciplines might differ in the replicability of their findings. In a widely cited analysis, for example, cognitive psychology studies were found, on average across several effects, to replicate more frequently than social psychology studies. In addition, findings from personality psychology have recently been identified as especially likely to replicate.
There are multiple ways to compare the impact of disciplines, each with its advantages and disadvantages: using journal impact metrics, attending to institutional or organizational structures, or crawling publicly available author profiles. In the following, we compare these different approaches. One way is to look at the average (or median) impact factors of the journals associated with a discipline. For example, the Journal Citation Reports [JCR] identify 9 separate psychological categories and provide information about the average impact factor within these categories. By this measure, journal impact in the neurosciences is the highest in psychology, and impact in psychoanalysis the lowest.
Besides being frequently criticized as a problematic indicator of scholarly quality, the impact factor has the disadvantage that journal classification systems are typically domain-general, so their disciplinary divisions can appear somewhat haphazard. For example, the JCR derives most categories from subject matter (e.g., social psychology, developmental psychology) but also includes some categories based on methodology (experimental psychology) or theoretical approach (psychoanalysis). Furthermore, a representative scholar of a certain discipline might publish in cross-domain outlets (e.g., Psychological Review or PNAS) or in outlets of other disciplines (e.g., a personality psychologist publishing in Educational Psychology). This hampers a comparison between disciplines if journal classifications are used. Impact across disciplines can also be compared by establishing individual indices from representative groups of researchers, such as those within a certain university or scientific association. For example, representatives of disciplines might be identified by means of their faculty affiliation (e.g., Department of Clinical Psychology). A potential problem, however, is that many universities do not have organizational structures that mirror the disciplinary organization of psychology. For example, many universities do not have a department of educational or personality psychology, although these are clearly recognizable sub-disciplines. Representatives of certain disciplines might also be drawn from the editorial boards of prominent disciplinary journals or from the boards of learned associations. This has the disadvantage that only relatively prominent researchers are sampled, which would not allow for a fair comparison across different career stages. A third approach, which was adopted in the current paper, is to use researchers’ (self-)identified disciplines on a publicly available bibliometric search engine, such as Google Scholar (GS).
We share all data and code of this project on our Open Science Framework page https://osf.io/rj9ae/?view_only=022e120070514a748f7a3dab07dfefb8 so that others can reproduce our analyses. Because only public data were used, no ethical permission was deemed necessary. For privacy reasons, we refrained from sharing identifying information (such as GS identifiers, scientists’ given names, or the labels they endorsed) in our uploaded materials, but all findings can still be reproduced with the shared materials.

Procedure

Extraction and processing of profiles

For each of the 10 focal disciplines, GS profiles were identified by using the “label” function in searches. For example, for social psychology, “label: social_psychology” was entered in the search bar to identify profiles. Search results are displayed by GS in groups of ten, in descending order of each researcher’s total number of citations. We saved each results page until the number of citations per scholar on that page dropped below 100. For “mathematical psychology” and “biological psychology”, the number of results pages was lower than 10, suggesting that scientists from these disciplines might use different terminology to identify their domain. By inspecting the journal titles from these categories in the Journal Citation Reports (JCR), we identified “psychometrics” and “psychophysiology” as alternative labels, which indeed produced sufficient results in GS.
For neuroscience, we added the modifier “cognitive” to ensure the sampling of psychological researchers (in contrast to, for example, medical neuroscientists) and because the label “cognitive neuroscience” is frequently used as a combination (e.g., as in the Cognitive Neuroscience Society).

Crawling of citations and refinement of data

We used automated scripts to extract career information for each GS profile, using the “comparison” function of the R package “scholar”, version 0.1.5. This package extracts the number of citations for each “career year” of different scholars. The crawling for all disciplines took place in February 2019; because of this, we only included citation data up to 2018. For every included GS identifier, the crawling produced one row per year in which the scholar was cited (i.e., long-format data), with the number of citations and the scholar’s “career year” (0 in the year of the first publication, 1 one year later, etc.) as additional columns. Further inspection indicated that it was beneficial to compute the year of first publication ourselves, in addition to relying on the scholar package, for two reasons: a) citations are only tracked from 1980 onwards in GS, and b) the “scholar” package relies on the first bar of the “citations per year” graph, although this method sometimes produces incorrect data. Further inspection indicated that the latter primarily happened when profiles of scholars (especially those with common names) included publications from multiple scholars. Furthermore, this happened for some highly prominent and established researchers, for whom the citation counter started much later than the year of their actual first publication.
We solved this by implementing an algorithm that extracted the first publication from a researcher’s total publication output list, looking for the year of the first publication that satisfied three criteria: a) featuring the author’s last name in the list of authors, b) attracting at least 10 citations, and c) being followed by another publication within 3 years. To test the performance of this algorithm, we manually inspected 106 cases in which the discrepancy between the scholar-package estimate and our own algorithm exceeded 15 years. This inspection confirmed the validity of our algorithm. Because our algorithm relied on the last name of the author (see Criterion a), it did not work for 709 profiles (mostly due to naming issues, for example, when researchers added a suffix such as “PhD” to their name or included foreign characters in their name). For these profiles, we used the estimate produced by the scholar package, which was still justified: when predicting the classic GS estimate with our algorithmic alternative, the association was extremely high, β = .91, p < .001. Second, citation rates might depend on the quantity of papers. Accordingly, we extracted the number of rows resulting from the “get_publications” function of the “scholar” package. To transform this into a score of productivity, this number was divided by the difference between the author’s year of last publication and the author’s year of first publication. Because this variable had some extreme outliers, the maximum productivity score was capped at 40. Third, inspection of initial results indicated that a non-trivial number of scholars participated in the multi-author publication on the replicability of psychological findings. This publication greatly inflated the citation counts of 253 authors, many of them relatively junior, so the paper had the potential to strongly bias career-progression estimates.
Because participation in this paper was skewed across disciplines (with an over-representation of social psychology and some other disciplines), it seemed necessary to correct for this. Accordingly, we flagged the scholars who were included in the author list, amounting to 46 scholars, and excluded their citations to the replication article.

Sample

Number of profiles

We obtained 6,880 profiles by crawling GS using the above-described keywords. The total number of profiles for each of the 10 disciplines is shown in the corresponding table. Strikingly, cognitive neuroscience (29%) and social psychology (23%) were by far the largest disciplines. After merging duplicate profiles, we ended up with information about 6,532 researchers.

Geographical spread of profiles

GS encourages users to verify their email address, of which only the domain name is displayed in the public profile. This information was provided by all but 270 scholars. In addition, 248 profiles were linked to an “.org” or “.com” email address, which could not be used as an indicator of geography. We used the remaining extensions as a proxy for the country of the authors’ primary affiliations. Profiles were tabulated according to their email extension, which equals the country code (e.g., “.fr” for France) with one exception: the “.edu” extension can theoretically be used by institutions in all countries but is predominantly used by US institutions. Results indicated that the USA (as indicated by the .edu extension) was by far the country with the most author profiles, accounting for more than a third of all profiles. Furthermore, more than 50% of all profiles belonged to one of four Anglo-Saxon countries: the USA, the UK, Canada, and Australia. In fact, these four countries occupied four spots in the Top 5 countries with the most GS profiles.
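A minimal Python sketch of the email-extension country proxy described above (helper names are hypothetical; the original processing was done in R):

```python
from collections import Counter

def domain_country(email_domain):
    """Map a verified GS email domain to a country proxy.

    As in the text: ".edu" is treated as US, while ".org" and ".com"
    are uninformative and therefore yield None.
    """
    tld = email_domain.rsplit(".", 1)[-1].lower()
    if tld == "edu":
        return "US"
    if tld in {"org", "com"}:
        return None
    return tld.upper()  # country-code TLD, e.g. ".fr" -> "FR"

def tabulate_countries(domains):
    """Count profiles per country, dropping uninformative domains."""
    return Counter(c for c in map(domain_country, domains) if c is not None)
```

On a toy list, `tabulate_countries(["harvard.edu", "cnrs.fr", "gmail.com"])` counts one US and one FR profile and skips the ".com" address.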
In recent years, however, the predominance of Anglo-Saxon countries in the total pool of profiles has decreased, from about 70% in the 1980s to around 55% in 2010 (i.e., their relative predominance decreased by 21%), when this development stagnated (see S4 Fig in for a graphical depiction). One important explanation for this decrease is that the percentage of the world’s population living in these countries also decreased, from 7.3% in 1980 to 6.0% in 2018 (i.e., their relative population predominance decreased by 18%; https://data.worldbank.org ).

Change over time in sample composition

We checked the number of researcher profiles that were cited in any particular year. This number increased exponentially between 1980 (the earliest year for which citations were tracked) and 2015, after which it leveled off. This indicates an increasing popularity of GS as a way to organize and track one’s citations . Especially during its most rapid growth, between 2000 and 2010, many new profiles were added to the portal, as indicated by the relative lack of increase in researchers’ average career age. After 2010, the average career age of profiles started increasing again, and is currently at 13.9 years (i.e., around a mid-career level). GS can thus be considered a good source for comparing the careers of scientists across a wide range of academic career stages.

Extraction and processing of profiles

For each of the 10 focal disciplines, GS profiles were identified by using the “label” function in searches. For example, for social psychology, “label: social_psychology” was entered in the search bar to identify profiles. Search results are displayed by GS in groups of ten, in descending order of each researcher’s total number of citations. We saved each results page until the number of citations per scholar on the corresponding page dropped below 100.
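The page-saving stop rule just described can be sketched as follows. This is an illustrative Python sketch; `pages` stands in for successively fetched-and-parsed GS result pages, and whether the cut-off page itself was retained is not stated in the text, so keeping it here is an assumption.

```python
def collect_profiles(pages):
    """Walk successive result pages (10 profiles each, in descending
    order of total citations) and stop after the first page on which a
    scholar has fewer than 100 citations. Each page is a list of
    (name, total_citations) pairs (an assumed toy structure)."""
    kept = []
    for page in pages:
        kept.extend(page)
        if min(citations for _, citations in page) < 100:
            break  # stop rule from the text
    return kept

pages = [[("a", 500), ("b", 300)],
         [("c", 120), ("d", 90)],   # crawl stops after this page
         [("e", 50)]]
print([name for name, _ in collect_profiles(pages)])  # -> ['a', 'b', 'c', 'd']
```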
For “mathematical psychology” and “biological psychology”, the number of result pages was lower than 10, suggesting that scientists from these disciplines might use different terminology to identify their domain. By inspecting the journal titles from these categories in the Journal Citation Reports (JCR), we identified “psychometrics” and “psychophysiology” as alternative labels, which indeed produced sufficient results in GS. For neuroscience, we added the prefix “cognitive” to ensure sampling of psychological researchers (in contrast to, for example, medical neuroscientists) and because the label “cognitive neuroscience” is frequently used in combination (e.g., as in the Cognitive Neuroscience Society).

Crawling of citations and refinement of data

We used automated scripts to extract career information for each GS profile, using the “comparison” function of the R-package “scholar”, Version 0.1.5 . This package extracts the number of citations for each “career year” of different scholars. The crawling for all disciplines took place in February 2019. Because of this, we only included citation data up until 2018. For every included GS identifier, the crawling produced one row per year in which the scholar was cited (i.e., long-format data), with the number of citations and the “career year” (0 in the year of the first publication, 1 one year after the first publication, etc.) of the scholar as additional columns. Further inspection indicated that it was beneficial to compute the year of first publication ourselves, in addition to relying on the scholar package. This had two reasons: a) citations are only tracked from 1980 onwards in GS, and b) the “scholar” package relies on the first bar of the “citations per year” graph, although this method sometimes produces incorrect data. Further inspection indicated that the latter primarily happened when profiles of scholars (especially those with common names) included publications from multiple scholars.
Furthermore, this happened for some highly prominent and established researchers, for whom the citation counter started much later than the year of their actual first publication; these were the cases targeted by the first-publication algorithm described above.

Preliminary investigation of sampling coverage

As stated in the introduction, one important limitation of our approach is that it restricts our sample to researchers who a) have a GS profile and b) describe their research focus in the profile, and c) that it assigns them to a discipline only if they used a relatively narrow discipline label to describe their focus.
To evaluate the effect of these restrictions, we extracted the lists of (associate) editors of all journals dedicated solely to personality psychology (and not also to social psychology or any other discipline or topic): Journal of Individual Differences , Journal of Personality , Personality and Individual Differences , European Journal of Personality , the Personality Processes and Individual Differences section of the Journal of Personality and Social Psychology , the Journal of Research in Personality , and Personality Science . On the websites of these journals’ editorial boards, we found 102 unique names (see Appendix A in ). Of these individuals, 90 (88%) had a Google Scholar profile, and 81 (79%) used keywords on their profile. Of those 81 people, 28 (35%) used “personality” as a keyword, 10 (12%) used “individual differences”, and 9 (11%) used “personality psychology”. This unsystematic search illustrates that our approach identified a non-trivial but relatively low percentage of relevant profiles. We elaborate on the limitations of this coverage rate in the Discussion; we also repeated the analyses using “personality” and “individual differences” as additional keywords. For now, we proceeded with the analyses under the (seemingly reasonable) assumption that psychological disciplines would not systematically differ in coverage rate. Still, it needs to be kept in mind that our results underestimate the actual figures, and that their greatest value therefore lies in the relative comparison between disciplines and over time, which is the focus of our study. As a second check, we sampled all the journals of the 120 personality researchers. We conducted this analysis more than a year after the original data collection.
In the meantime, one researcher no longer had a GS profile, for two researchers we could not identify publications that fit our criteria (e.g., cited at least 10 times), and the publications of 7 researchers were primarily in foreign languages, which we determined via the R-package textcat . We then searched each publication outlet for the occurrence of either “personality” or “individual differences” and computed the average percentage of such disciplinary publications for each of the 110 remaining researchers. On average, 34.6% of the outlets in which the self-identified personality researchers published contained a corresponding keyword. In all but seven cases, at least one publication appeared in a corresponding outlet. This supports the validity of using self-identified labels to assign researchers to disciplines, but the imperfect overlap also highlights that a focus on self-identification produces substantially different results than a focus on journals as a way of classifying scholarly work into disciplines. We return to this issue in the Discussion.

Research Question 1: Relative endorsement and international representation of disciplines

As a first step, we analyzed the degree of cross-identification across the 10 disciplines in Google Scholar. Across disciplines, the average percentage of profiles that also endorsed another discipline was 15%. That is, the “average” discipline consisted of six out of seven members who endorsed only that discipline and one out of seven members who also endorsed at least one other discipline. This number is somewhat misleading, however, because the percentage was lower in the larger disciplines, and the average cross-discipline percentage was also inflated by profiles that endorsed more than two disciplines (see the note to ). On average, only 5% of all profiles endorsed more than one discipline.
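The cross-identification tally described above can be sketched as follows (illustrative Python; the mapping of profile ids to endorsed labels is an assumed toy structure):

```python
def cross_identification(profiles):
    """profiles: dict mapping profile id -> set of endorsed discipline
    labels. Returns the share of profiles endorsing more than one
    discipline; in the text, such profiles were later moved to a
    separate 'multidisciplinary psychology' category."""
    multi = sum(1 for labels in profiles.values() if len(labels) > 1)
    return multi / len(profiles)

profiles = {
    "p1": {"social psychology"},
    "p2": {"social psychology", "personality psychology"},
    "p3": {"psychometrics"},
    "p4": {"clinical psychology"},
}
print(cross_identification(profiles))  # -> 0.25
```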
Note that these profiles were moved to a new, manually computed category of “multidisciplinary psychology” to ensure that every researcher profile was assigned to only a single discipline. This also avoided the potential bias that citations in one discipline would also count as citations in another psychological discipline, which might have distorted our comparisons. As can be seen in , disciplines varied widely in their degree of multidisciplinarity. Whereas cognitive neuroscience, psychometrics, social psychology, clinical psychology, and psychoanalysis scored below 10%, personality psychology was an outlier in the other direction, with 43% of the researchers in this discipline endorsing at least one other domain. Further analysis indicated that 38 out of 52 multidisciplinary personality psychologists endorsed social psychology; the remaining 14 researchers were dispersed across all other disciplines. This finding mirrors the fact that in some countries (e.g., the US) social and personality psychology are thematically and institutionally intertwined . It should be noted, however, that the results for personality psychology were somewhat attenuated when the more expansive set of keywords was used (see ). Specifically, the expanded set resulted in “only” 22% multidisciplinary profiles, which is still relatively high but comparable with other disciplines, such as experimental psychology. Another aspect of Research Question 1 was to investigate relative shifts in disciplinary endorsement over historical time. We analyzed this question by first aggregating the relative frequencies of all disciplines for each year between 1980 and 2018. We then tested for between-discipline differences in a multi-level analysis by testing the significance of the interaction between discipline and year in a long data format (average endorsements of each discipline nested within each year between 1980 and 2018) and found a highly significant interaction effect, χ2(df = 10) = 817.04, p < .001.
We plotted the interaction in and found striking patterns. Psychometrics and cognitive neuroscience showed a strong increase over time, and the proportion of researchers endorsing multiple disciplines also increased; in each of these cases, the increases seem to have plateaued in recent years. Developmental psychology, by contrast, declined somewhat around the year 2000, and social psychology and psychoanalysis showed a marked decline without clear signs (yet) of leveling off. Not much historical change was found for the narrow set of personality psychology profiles. However, when the expanded three-keyword set was used, the prominence of personality psychology first decreased until the mid-nineties, after which it has been slowly but steadily increasing again (see S1 Fig in ).

Research Question 2: Topical coverage of psychological disciplines

For Research Question 2, we looked at the frequency of endorsed topic labels across profiles. Whereas all profiles had, by definition, at least one label (i.e., the label of the discipline), 84% of profiles had at least two labels, 72% had at least three labels, 51% had at least four labels, and 30% had the maximum of five labels. A preliminary analysis indicated that some frequent labels were redundant (e.g., psychotherapy research vs. psychotherapy ), so we collapsed across them. Subsequently, we identified the top 10 most endorsed topics for a) psychology in general, b) each discipline in particular, and c) multidisciplinary profiles. As can be seen in , the results indicated a wide variety of topics. For psychology as a whole, the ten most common topics were (in decreasing order): emotion , neuroimaging , health psychology , memory , attention , social cognition , judgment and decision-making , personality , fMRI , and statistics . Nevertheless, even these more frequent topics were only endorsed by between 3.1% ( emotion ) and 1.4% ( statistics ) of profiles.
In , the 10 most frequent topics are also listed for each discipline. As can be seen, this produced a face-valid “topic profile” for each discipline. Interestingly, there were stark differences between disciplines in the distribution of topics, with only some disciplines having a clear “signature” topic (defined as being endorsed by at least 10% of profiles within that discipline). Specifically, and not surprisingly, psychotherapy emerged as the signature topic for psychoanalysis, emotion for psychophysiology, and statistics for psychometrics. The topics emotion , health psychology , and personality in particular were highly multidisciplinary, as evidenced by their prominence in many top 10 lists, including the one for researchers with a multidisciplinary profile. Profiles focusing on psychopathology and psychotherapy also appeared relatively often in the lists of most frequent topics. We then created a list of all 487 topics that had been endorsed at least 5 times (excluding the discipline labels), with columns indicating the absolute frequencies of endorsement within each discipline (log-transformed to reduce their skew). The co-occurrence of topic endorsements across disciplines can then be expressed as a correlation matrix among the 11 discipline columns, computed across the 487 topic frequencies. To examine whether this correlation matrix could be reduced to a smaller set of “meta-disciplines”, we conducted a factor analysis. Specifically, we ran a parallel analysis that indicated three factors, consistent with visual inspection of a scree plot. However, the three-factor solution produced an isolated factor with only one substantial loading greater than .40 (for clinical psychology). For this reason, we instead extracted two factors using principal axis factoring.
Because several topics were endorsed by multiple disciplines, orthogonality between disciplines would be an untenable assumption, so we applied an oblimin rotation. Inspection of the factor loadings, as presented in , indicated that the first factor was dominated by cognitive neuroscience and experimental psychology, whereas the second factor was dominated by personality psychology and clinical psychology (with substantial loadings also for psychophysiology, psychometrics, social psychology, and developmental psychology). However, some disciplines (particularly educational psychology and psychoanalysis) were not well covered by this two-factor solution. The corresponding factor solution might be qualified as “weak”, yet this is expected because a stronger solution would invalidate the existence of separate disciplines. As can be seen in S3 Table of , the factor solution was essentially the same when using the expanded set of personality keywords.

Research Question 3: Publication patterns of psychological disciplines

To address Research Question 3, we first compared the average output per discipline (i.e., productivity). An ANOVA with discipline as a factor produced a highly significant difference, F(10, 6442) = 11.31, p < .001, η2 = 0.02. In , we plotted these differences as well as the overall distribution of productivity. As can be seen from the accumulation of data points at the lower end of the distribution, productivity resembled a power-law distribution, with most researchers publishing fewer than 5 papers per year and a smaller number of researchers publishing (much) more. Mean and median productivity across disciplines are displayed in . From this analysis, it emerged that cognitive neuroscience, developmental psychology, and social psychology were relatively low in productivity.
By comparison, personality psychology, psychometrics, multidisciplinary psychology, psychophysiology, and clinical psychology had higher productivity levels. As can be seen in S4 Table of , the productivity of personality psychologists was more in line with the average when the expanded keyword set was used to identify them. We finally compared disciplines in their ability to attract citations as a function of career progression and productivity. To begin, we ran a multilevel regression model with main effects of career year and discipline and compared its fit with a model that additionally included their interaction. The interaction was highly statistically significant, χ2(df = 10) = 2229.20, p < .001. In the regression analysis, the interaction between a continuous variable (career year) and a categorical variable (discipline) is technically handled by converting the categorical variable into a series of dummy contrasts that indicate how the slope of each factor level differs from the slope of the reference category. In the present case, we chose psychoanalysis as the reference category because this was the discipline with the lowest impact in the JCR. The slope for each discipline, corresponding to the increase in citations per career year of its members, thus consists of the reference slope plus the discipline-specific interaction effect. For validation purposes, we compared these estimates with the discipline-specific aggregate impact figures from the 2019 JCR. We show the results in . Although this analysis was based on only 10 cases (personality psychology is not a separate category in the JCR), the correlation was .59, p = .04 (one-sided). This strengthened our confidence in the regression-based approach.
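The dummy-contrast logic just described can be illustrated with a small sketch: each discipline's slope is the reference category's slope plus that discipline's interaction coefficient. The numbers below are invented purely for illustration and are not estimates from the paper.

```python
def discipline_slopes(reference_slope, interaction_effects):
    """With dummy-coded discipline contrasts, the citations-per-career-year
    slope of each discipline equals the slope of the reference category
    (psychoanalysis in the text) plus that discipline's interaction
    coefficient. All values here are hypothetical."""
    slopes = {"psychoanalysis": reference_slope}  # reference category
    for discipline, delta in interaction_effects.items():
        slopes[discipline] = reference_slope + delta
    return slopes

slopes = discipline_slopes(2.0, {"personality psychology": 5.0,
                                 "cognitive neuroscience": 4.5})
print(slopes["personality psychology"])  # -> 7.0
```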
As can be seen in and , personality psychology was the discipline with the greatest citation increase, with cognitive neuroscience, multidisciplinary psychology, psychometrics, and social psychology also attracting many citations. In contrast, the impact of psychoanalysis, clinical psychology, experimental psychology, and psychophysiology was much lower. Unexpectedly, as can be seen in , the smoothed line for personality psychology had a wider confidence interval than those of the other disciplines, an issue to which we return in the Discussion section. When these analyses were replicated with the expanded personality psychology keyword set, however, this discipline still ended up in a high position, though not substantially different in impact from cognitive neuroscience and multidisciplinary psychology. As can be seen in S3 Fig of , the confidence intervals of the smoothed line describing the development of yearly citation volume by academic age also became more comparable to those of the other disciplines. To check the robustness of the results, we also ran an additional analysis with career year as a within-person centered variable, to account for the possibility that differences in career lengths across disciplines might drive the results. This was not the case, however, as the pattern of results was almost identical. Furthermore, because of between-discipline differences in productivity, we investigated the interaction between career year, discipline, and productivity to estimate citation increases relative to one unit of productivity. For this analysis, too, the pattern of results was strikingly similar, with the exception of developmental psychology, which emerged in a much stronger position, likely because its impact was adjusted upwards in light of its relatively low productivity.
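The within-person centering used in the robustness check can be sketched as follows (illustrative Python on toy long-format rows; the `(scholar_id, career_year, citations)` structure mirrors the crawled data as described earlier in the text):

```python
from collections import defaultdict

def center_career_year(rows):
    """Within-person centering of career year: subtract each scholar's
    own mean career year from that scholar's yearly rows, so that
    between-person differences in career length drop out of the
    career-year predictor."""
    years = defaultdict(list)
    for sid, year, _ in rows:
        years[sid].append(year)
    means = {sid: sum(ys) / len(ys) for sid, ys in years.items()}
    return [(sid, year - means[sid], cites) for sid, year, cites in rows]

rows = [("s1", 0, 10), ("s1", 2, 30), ("s2", 1, 5)]
print(center_career_year(rows))  # -> [('s1', -1.0, 10), ('s1', 1.0, 30), ('s2', 0.0, 5)]
```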
Finally, inspection of the most impactful researchers in personality psychology indicated that a Nobel prize laureate (James Heckman), who identified with personality psychology in GS, was included among them, which might have biased findings. Even after excluding Heckman from the analyses, however, personality psychology was still the most impactful discipline in all cases (although its confidence interval did overlap with some other disciplines; see S6 Table in ). As stated in the introduction, one important limitation of our approach is that it restricts our sample to researchers who a) have a GS profile and b) describe their research focus in the profile, and that c) assigns them to a discipline only if they used a relatively narrow discipline label to describe their focus. To evaluate the effect of these restrictions, we extracted the lists of (associate) editors all journals dedicated solely to personality psychology (and not also to social psychology or any other discipline or topic): Journal of Individual Differences , Journal of Personality , Personality and Individual Differences , European Journal of Personality , the Personality Processes and Individual Differences section of the Journal of Personality and Social Psychology , the Journal of Research in Personality , and Personality Science . On the websites of these journals’ editorial boards, we found 102 unique names (see Appendix A in ). Of these individuals, 90 (88%) had a Google Scholar profile, and 81 (79%) used keywords on their profile. Of those 81 people, 28 (35%) used “personality” as a keyword, 10 (12%) used “individual differences”, and 9 (11%) used “personality psychology. This unsystematic search illustrates that our approach was able to identify a non-trivial but relatively low percentage of relevant profiles. We will elaborate on the limitations of this coverage rate in the Discussion and also repeated the analyses using “personality” and “individual differences” as additional keywords. 
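These coverage figures are simple proportions; the arithmetic can be sketched as follows (a minimal Python illustration, with the counts hard-coded from the text and the variable names our own):

```python
# Coverage of personality-journal editors in Google Scholar.
# All counts are taken from the text; variable names are ours.
editors = 102        # unique editorial-board members found
with_profile = 90    # of those, had a GS profile
with_keywords = 81   # of those, listed keywords on their profile

# Keyword usage among the 81 profiles that listed any keywords.
keyword_counts = {
    "personality": 28,
    "individual differences": 10,
    "personality psychology": 9,
}

profile_rate = with_profile / editors
keyword_rate = with_keywords / editors
usage_rates = {k: v / with_keywords for k, v in keyword_counts.items()}

print(f"profile coverage: {profile_rate:.0%}")   # 88%
print(f"keyword coverage: {keyword_rate:.0%}")   # 79%
for label, rate in usage_rates.items():
    print(f"  {label!r}: {rate:.0%}")
```

Note that the keyword percentages (35%, 12%, 11%) are computed relative to the 81 profiles with any keywords, not to all 102 editors.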
For now, we proceeded with the analyses under the (seemingly reasonable) assumption that psychological disciplines would not systematically differ in the coverage rate. Still, it needs to be kept in mind that our results underestimate the actual figures, and that their greatest value therefore lies in the relative comparison between disciplines and over time, which we focus on in our study. As a second check, we sampled all the journals in which the 120 personality researchers published. We conducted this analysis more than a year after the original data collection. In the meantime, one researcher no longer had a GS profile, for two researchers we could not identify publications that fit our criteria (e.g., cited at least 10 times), and the publications of seven researchers were primarily in foreign languages, which we determined via the R package textcat. We then searched each publication outlet for occurrence of either “personality” or “individual differences” and computed the average percentage of such disciplinary publications for each of the remaining 110 researchers. On average, 34.6% of the outlets in which the self-identified personality researchers published contained a corresponding keyword. In all but seven cases, at least one publication appeared in a corresponding outlet. This supports the validity of using self-identified labels to assign researchers to disciplines, but the imperfect overlap also highlights the fact that a focus on self-identification produces substantially different results than a focus on journals as a way to classify scholarly work in disciplines. We return to this issue in the Discussion. As a first step, we analyzed the degree of cross-identification across the 10 disciplines in Google Scholar. We thus established that the average percentage across disciplines was 15%.
That is, the “average” discipline consisted of six out of seven members who only endorsed that discipline, and one out of seven members who also endorsed at least one other discipline. This number is somewhat misleading, however, because the percentage was lower in the larger disciplines, and the average cross-discipline percentage was also inflated by profiles that endorsed more than two disciplines (see the note to ). On average, only 5% of all profiles endorsed more than one discipline. Note that these profiles were moved to a new and manually computed category of “multidisciplinary psychology” to ensure that every researcher profile was assigned to only a single discipline. This also avoided the potential bias that citations in one discipline would also count as citations in another psychological discipline, which might have distorted our comparisons. As can be seen in , disciplines varied widely in the degree of their multidisciplinarity. Whereas cognitive neuroscience, psychometrics, social psychology, clinical psychology, and psychoanalysis scored below 10%, personality psychology was an outlier in the other direction, with 43% of the researchers in this discipline endorsing at least one other domain. Further analysis indicated that 38 out of 52 multidisciplinary personality psychologists endorsed social psychology, whereas the remaining 14 researchers were dispersed across all other disciplines. This finding mirrors the fact that in some countries (e.g., the US) social and personality psychology are thematically and institutionally intertwined. It should be noted, however, that results for personality psychology were somewhat attenuated if the more expansive set of keywords was used (see ). Specifically, the expanded set resulted in “only” 22% multidisciplinary profiles, which is still relatively high but comparable with other disciplines, such as experimental psychology.
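The bookkeeping described above, reassigning multi-discipline profiles to a separate category and computing per-discipline cross-identification rates, can be sketched as follows (a minimal illustration; the profile names and endorsements are invented):

```python
# Toy GS profiles mapping researchers to endorsed discipline labels.
# Names and endorsements are invented for illustration only.
profiles = {
    "r1": {"social psychology"},
    "r2": {"personality psychology", "social psychology"},
    "r3": {"clinical psychology"},
    "r4": {"personality psychology"},
    "r5": {"cognitive neuroscience"},
}

# Profiles endorsing more than one discipline are reassigned to a separate
# category so that each researcher counts toward exactly one discipline.
assignment = {
    name: next(iter(labels)) if len(labels) == 1 else "multidisciplinary psychology"
    for name, labels in profiles.items()
}

def cross_rate(discipline):
    """Share of a discipline's members who also endorsed another discipline."""
    members = [labels for labels in profiles.values() if discipline in labels]
    multi = [labels for labels in members if len(labels) > 1]
    return len(multi) / len(members) if members else 0.0

print(assignment["r2"])                       # multidisciplinary psychology
print(cross_rate("personality psychology"))   # 0.5 in this toy sample
```

The cross-identification rate is computed on the original endorsement sets, before the reassignment, which matches the two-step description in the text.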
Another aspect of Research Question 1 was to investigate relative shifts in disciplinary endorsement over historical time. We analyzed this question by first aggregating relative frequencies of all disciplines for each year between 1980 and 2018. We then tested for between-discipline differences in a multilevel analysis by testing the significance of the interaction between discipline and year in a long data format (average endorsements of each discipline nested within each year between 1980 and 2018) and found a highly significant interaction effect, χ2(df = 10) = 817.04, p < .001. We plotted the interaction in and found striking patterns. Psychometrics and cognitive neuroscience showed a strong increase over time, as did the proportion of researchers endorsing multiple disciplines. In each of these cases, the increases seem to have plateaued in recent years. In contrast, social psychology and psychoanalysis showed a marked decline, but without clear signs (yet) of the decline leveling off. Developmental psychology also declined somewhat around the year 2000. Not much historical change was found for the narrow set of personality psychology profiles. However, when the expanded three-keyword set was used, the prominence of personality psychology first decreased until the mid-nineties, after which it has been slowly but steadily increasing again (see S1 Fig in ). For Research Question 2, we looked at the frequency of endorsed topic labels across profiles. Whereas all profiles had, by definition, at least one label (i.e., the label of the discipline), 84% of profiles had at least two labels, 72% had at least three labels, 51% had at least four labels, and 30% had the maximum of five labels. A preliminary analysis indicated that some frequent labels were redundant (e.g., psychotherapy research vs. psychotherapy), so we collapsed across them.
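The collapsing of redundant labels before counting can be sketched as follows (a minimal Python illustration; the profile data and the synonym map are invented assumptions, not the paper's actual collapsing rules):

```python
from collections import Counter

# Invented profile keyword lists; the synonym map is an illustrative
# assumption, not the paper's actual list of redundant labels.
SYNONYMS = {"psychotherapy research": "psychotherapy"}

profiles = [
    ["emotion", "psychotherapy"],
    ["psychotherapy research", "emotion"],
    ["memory", "attention"],
]

# Collapse redundant variants, then count endorsements across profiles.
counts = Counter(
    SYNONYMS.get(label, label)
    for keywords in profiles
    for label in keywords
)

# Relative endorsement: share of profiles endorsing each collapsed topic.
n_profiles = len(profiles)
shares = {topic: c / n_profiles for topic, c in counts.items()}
print(counts.most_common(3))
```

After collapsing, "psychotherapy research" and "psychotherapy" count as a single topic, which is the behavior described above.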
Subsequently, we identified the top 10 most endorsed topics for a) psychology in general, b) each discipline in particular, and c) multidisciplinary profiles. As can be seen in , results indicated a wide variety of topics. For psychology as a whole, the ten most common topics were (in decreasing order): emotion, neuroimaging, health psychology, memory, attention, social cognition, judgment and decision-making, personality, fMRI, and statistics. Nevertheless, even these more frequent topics were only endorsed by between 3.1% (emotion) and 1.4% (statistics) of profiles, respectively. In , the 10 most frequent topics are also listed for each discipline. As can be seen, this produced a face-valid “topic profile” for each discipline. Interestingly, there were stark differences between disciplines in the distribution of topics, with only some disciplines having a clear “signature” topic (defined as being endorsed by at least 10% of profiles within that discipline). Specifically, and not surprisingly, psychotherapy emerged as the signature topic for psychoanalysis, emotion for psychophysiology, and statistics for psychometrics. The topics emotion, health psychology, and personality in particular were highly multidisciplinary, as evidenced by their prominence in many top 10 lists, including the one from researchers with a multidisciplinary profile. Profiles focusing on psychopathology and psychotherapy also appeared relatively often in the lists of most frequent topics. We further proceeded to create a list with all 487 topics that had been endorsed at least 5 times (excluding the discipline labels), with columns indicating the absolute frequencies of endorsement within each discipline (log-transformed to reduce their skew). The co-occurrence of the vectors indicating relative topic endorsement in GS profiles across the 11 discipline columns can be expressed as a correlation matrix (487 topic frequencies × 11 disciplines).
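Assembling such a topic-by-discipline matrix, log-transforming it, and correlating its columns can be sketched as follows (toy frequencies invented for illustration; the real matrix is 487 topics × 11 disciplines):

```python
import numpy as np

# Invented endorsement frequencies: rows = topics, columns = disciplines.
# (The paper's real matrix is 487 topics x 11 disciplines.)
topics = ["emotion", "memory", "statistics", "psychotherapy"]
disciplines = ["clinical", "cognitive", "psychometrics"]
freq = np.array([
    [30.0, 12.0,  2.0],
    [ 1.0, 40.0,  3.0],
    [ 2.0,  5.0, 25.0],
    [20.0,  1.0,  1.0],
])

# Log-transform to reduce skew (log1p handles zero frequencies gracefully).
logged = np.log1p(freq)

# Correlate the discipline columns across the topic-frequency vectors.
corr = np.corrcoef(logged, rowvar=False)
print(np.round(corr, 2))
```

The resulting discipline-by-discipline correlation matrix is what the factor analysis described next takes as input.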
To examine whether this correlation matrix can be reduced to a smaller subset of “meta-disciplines”, we conducted a factor analysis. Specifically, we ran a parallel analysis that indicated three factors, which was consistent with the visual inspection of a scree plot. However, the three-factor solution produced an isolated factor with only one substantial loading greater than .40 (for clinical psychology). For this reason, we instead extracted two factors using principal axis factoring. Because there were several topics that were endorsed by multiple disciplines, orthogonality between disciplines would be an untenable assumption, so we applied an oblimin rotation. Inspection of factor loadings, as presented in , indicated that the first factor was dominated by cognitive neuroscience and experimental psychology, whereas the second factor was dominated by personality psychology and clinical psychology (with substantial loadings also for psychophysiology, psychometrics, social psychology, and developmental psychology). However, some disciplines (particularly educational psychology and psychoanalysis) were not well covered by this two-factor solution. The corresponding factor solution might be qualified as “weak”, yet this is expected because a stronger solution would invalidate the existence of separate disciplines. As can be seen in S3 Table of , the factor solution was basically the same when using an expanded set of personality keywords. To address Research Question 3, we first compared the average output per discipline (i.e., productivity). An ANOVA with discipline as a factor produced a highly significant difference, F(10, 6442) = 11.31, p < .001, η2 = 0.02. In , we plotted these differences as well as the overall distribution of productivity.
As can be seen by the accumulation of data points at the lower end of the distribution, the productivity distribution resembled a power law, with most researchers publishing fewer than five papers per year, but a smaller number of researchers publishing (much) more. Mean and median productivity across disciplines are displayed in . From this analysis, it emerged that cognitive neuroscience, developmental psychology, and social psychology were relatively low in productivity. By comparison, personality psychology, psychometrics, multidisciplinary psychology, psychophysiology, and clinical psychology had higher productivity levels. As can be seen in S4 Table of , productivity of personality psychologists was more in line with the average when using the expanded keyword set to identify them. We finally compared disciplines in their ability to attract citations as a function of career progression and productivity. To begin, we ran a multilevel regression model with main effects of career year and discipline, and compared its fit with a model that additionally included their interaction. From this analysis, it turned out that the interaction was highly statistically significant, χ2(df = 10) = 2229.20, p < .001. In the regression analysis, the interaction between a continuous variable (career year) and a categorical variable (discipline) is technically handled by converting the categorical variable into a series of dummy contrasts that indicate how the corresponding slope of each factor level differs from the slope of the reference category. In the present case, we chose psychoanalysis as the reference category because this was the discipline with the lowest impact in JCR. The slope for each discipline, corresponding to the increase in citations per career year of its members, thus consists of the reference slope plus the discipline-specific interaction effect.
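This dummy-contrast logic can be illustrated with a plain least-squares fit on simulated data (a sketch only: the actual analysis was a multilevel model, and all numbers and group labels here are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: citations grow linearly with career year, with a
# discipline-specific slope (group 0 plays the role of the reference
# category, e.g. psychoanalysis). All numbers are invented.
n = 200
year = rng.uniform(0, 20, size=2 * n)
group = np.repeat([0, 1], n)          # 0 = reference discipline, 1 = other
true_slopes = np.array([2.0, 5.0])
citations = true_slopes[group] * year + rng.normal(0, 1, size=2 * n)

# Design matrix: intercept, discipline dummy, year, and dummy x year.
X = np.column_stack([np.ones_like(year), group, year, group * year])
beta, *_ = np.linalg.lstsq(X, citations, rcond=None)

ref_slope = beta[2]              # slope of the reference category
other_slope = beta[2] + beta[3]  # reference slope + interaction effect
print(round(ref_slope, 2), round(other_slope, 2))
```

The fit recovers the reference slope directly as the coefficient on year, while the second discipline's slope is the sum of that coefficient and the interaction coefficient, exactly the decomposition described above.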
For validation purposes, we compared these estimates with the discipline-specific aggregate impact figures from the 2019 JCR. We show the results in . Although this analysis was only based on 10 cases (personality psychology is not a separate category in JCR), the correlation was .59, p = .04 (one-sided). This strengthened our faith in our regression-based approach. As can be seen in and , personality psychology was the discipline with the greatest citation increase, with cognitive neuroscience, multidisciplinary psychology, psychometrics, and social psychology also attracting many citations. In contrast, the impact of psychoanalysis, clinical psychology, experimental psychology, and psychophysiology was much lower. Unexpectedly, as can be seen in , the smoothed line for personality psychology had a wider confidence interval than the other disciplines, an issue to which we will return in the Discussion section. When replicating these analyses with the expanded personality psychology keyword set, however, this discipline still ended up in a high position, but not substantially different from the impact of cognitive neuroscience and multidisciplinary psychology. As can be seen in S3 Fig of , the confidence intervals of the smoothed line describing the development of yearly citation volume by academic age also became more comparable with those of other disciplines. To check the robustness of the results, we also ran an additional analysis with career year as a within-person centered variable to account for the possibility that differences in career lengths across disciplines might account for the results. This was not the case, however, as the pattern of results was almost identical. Furthermore, because of between-discipline differences in productivity, we wanted to investigate the interaction between career year, discipline, and productivity to estimate citation increases relative to one unit of productivity.
Also for this analysis, the pattern of results was strikingly similar, with the exception of developmental psychology, which emerged in a much stronger position, likely because its impact was adjusted upwards in light of its relatively low productivity. Finally, inspection of the most impactful researchers in personality psychology indicated that a Nobel Prize laureate (James Heckman), who identified with personality psychology in GS, was included among them, which might have biased findings. Even after excluding Heckman from the analyses, however, personality psychology was still the most impactful discipline in all cases (although its confidence interval did overlap with some other disciplines; see S6 Table in ).

Discussion

The present study set out to conduct a comprehensive scan of psychological disciplines via GS profiles, with the goal of mapping the numerical strength of disciplines, their thematic coverage, and their publication patterns. This produced a valuable picture of psychology as a field of research, as represented in GS, which is widely used among psychological scientists today. In the following, we discuss implications for each of our three research questions.

Relative prominence of disciplines

Our first research question concerned the numerical strength of the various disciplines as well as their co-occurrence within individual researcher profiles. Our analysis produced the striking finding that identifications with the discipline of cognitive neuroscience have seen a dramatic rise over the past 40 years, whereas those with social psychology have seen a strong decline (though social psychology has seen a large increase in the absolute number of GS profiles, due to the large rise in profiles overall).
In part, the relative rise in the proportion of profiles identifying with cognitive neuroscience and the relative decline in the proportion identifying with social psychology appear to be two sides of the same coin: If a greater proportion of researchers start to identify with cognitive neuroscience, a smaller proportion is left to identify with other disciplines (excluding endorsement of multiple disciplines, of course). This clearly did not happen with all disciplines, however. For example, the proportion of researchers identifying with mathematical psychology in their GS profiles clearly increased between 1980 and 1995. A follow-up analysis indicated that the increasing proportion of profiles from countries without an Anglo-Saxon background explained the decreasing proportion of profiles identifying with social psychology. Specifically, after including a variable indicating the percentage of Anglo-Saxon profiles in each year in the regression analysis, the negative association between calendar year and the proportion of social psychological profiles was no longer significant and constituted a relatively small effect size (partial R2 = 0.03). The rise of identifications with cognitive neuroscience was not necessarily surprising, as Robins et al. had already reported evidence for an increasing prominence of both cognitive psychology and, to a certain extent, neuropsychology. However, Robins and colleagues did not identify social psychology as a separate discipline, instead focusing on the school of behavioral psychology, which differs in focus from social psychology. The decline of identifications with social psychology has not appeared in previous studies and was surprising. As indicated by the additional analysis mentioned above, this decline seems mostly related to the fact that social psychology is becoming less representative of global psychology.
With more and more psychological researchers joining the global research community, this might produce a different composition of psychological science in the not-so-distant future. We also investigated differences in multidisciplinary focus across disciplines. From our scan, it emerged that only 5% of profiles endorsed more than one discipline. There were large differences between disciplines, however. Particularly the very large disciplines of cognitive neuroscience and social psychology were less frequently endorsed in combination with other disciplines. This might be seen as logical if there is more topical breadth with such large disciplines and thus perhaps less need to affiliate with other disciplines. That said, the relatively small discipline of psychoanalysis was also not characterized by many cross-disciplinary profiles, so other factors (like the degree of disciplinary identification, segregation of topics and methods) seem to also play a role here. The large degree of multidisciplinary identification in personality psychology (although less striking when using a more expansive keyword set) is consistent with the conclusion by Yang and Chiu that personality is a hub discipline with connections to and from many other disciplines (see also ). However, institutional factors might also have played a role. Consistent with journals such as the Journal of Personality and Social Psychology as well as combined societies such as the Society for Personality and Social Psychology , almost three quarters of multidisciplinary personality profiles also endorsed social psychology. The remaining multidisciplinary profiles, however, endorsed combinations with all other disciplines. 
Finally, it should be noted that while almost 70% of all multidisciplinary personality psychologists endorsed social psychology, the reverse was not true: Fewer than 3% of multidisciplinary social psychologists endorsed personality psychology, which is consistent with a recent analysis that identified personality psychology as only marginally related to social psychology. As stated, only a relative minority of profiles endorsed multiple disciplines. This might be partly a result of a binary identification tendency: the idea that it is useful to firmly identify with only one discipline. This may be enforced by institutional structures, academic positions, job postings, and tenure committees that often seek (highly) specialized scholars who can represent a given field. Needless to say, however, binary classifications also have practical components, given the way that psychology and academic practices are organized. After all, attending conferences and creating collaboration networks within a discipline takes time, and this is therefore much more difficult to repeat across multiple disciplines. All of this notwithstanding, we found that the percentage of multidisciplinary psychologists doubled across the study period, from ca. 2.5% to 5.0%. We can only speculate about the origins of this trend but think that three factors might play a role. First, the increase in multidisciplinarity might have intrinsic reasons. For example, researchers might be driven to study “psychology as a whole” because they really want to understand the wholeness of human functioning and perhaps realize over time that this is not possible within the confines of only one discipline. Second, there might be more recent institutional pressures towards multidisciplinarity, for example, in the tendency of large funding agencies to favor multidisciplinary work.
Third, the research community itself seems to yearn more and more, especially in the wake of the replicability crisis or credibility revolution, for increased cross-talk, sharing of data, and cooperation, resulting in multidisciplinary consortia and coordinated laboratories or studies.

Topics of research

Our second research question pertained to the topics that researchers endorsed on their profiles, in addition to the disciplines they identified with. An inspection of these topics indicated that the percentage of endorsement is likely an underestimation of actual research practices within a discipline. For example, fewer than 7% of clinical psychologists endorsed psychotherapy, and fewer than 7% of cognitive neuroscientists endorsed neuroimaging, even though these topics appear central to the disciplines in question. This likely reflects the degrees of freedom when creating a GS profile and the fact that the choice of certain labels might limit the perceived necessity to add additional terms (e.g., developmental psychologists apparently did not deem it necessary to include terms such as development or change). As scholars can list only five keywords, they need to take care in selecting keywords that are important to them, allow their easy identification (by themselves and others), and are not too redundant. Certain terms or concepts that overlap strongly with a discipline and are implicitly contained in the discipline denomination are then likely omitted in most cases. There were large differences between disciplines in the relative frequencies of endorsing certain labels. Overall, only psychoanalysis, psychophysiology, and psychometrics featured topics that were endorsed by more than 10% of profiles (psychotherapy, emotion, and statistics, respectively), whereas the relative endorsement in other disciplines was more diluted (e.g., fewer than 5% of social psychologists endorsed social cognition).
Overall, it seemed that the larger disciplines (clinical psychology, cognitive neuroscience, and social psychology) had a somewhat stronger dilution of topics than many smaller disciplines, perhaps reflecting greater critical mass for further sub-discipline specialization. Surprisingly, developmental psychology in particular seemed rather fragmented in terms of topic endorsement. Although speculative, this seems to reflect the combinatory power of the notion of development/change: Almost every psychological phenomenon changes with age/time, so a developmental psychologist can study an almost limitless array of topics. By comparison, other disciplines might be more constrained in their endorsements to certain key contents (e.g., a clinical psychologist might be more likely to study psychopathology). By extracting all common topics across disciplines and counting relative endorsements of these topics per discipline, we also created “content vectors” for each discipline. By factor-analyzing these vectors, we established a novel method of mapping psychological research. Speaking to the face validity of our new method, our factor solution was reminiscent of the correlational versus experimental distinction that Cronbach identified long ago and that was recently confirmed by Flis and van Eck using graphical mapping based on co-occurrence of terms in article abstracts. Based on topic endorsement frequencies, we indeed found a dimension dominated by cognitive neuroscience and experimental psychology, versus a second dimension dominated by clinical psychology, developmental psychology, personality psychology, psychophysiology, psychometrics, and social psychology. Of note, these dimensions differed from the results of Yang and Chiu, who found two dimensions: basic versus applied and population-specific versus population-general.
It seemed that social psychology and psychophysiology, which used to have strong experimental traditions, are currently focusing on topics that are also studied by traditionally “correlational” disciplines, like developmental psychology. Although speculative, it might be that more experimentally minded researchers within social psychology and psychophysiology have been increasingly gravitating towards and identifying with the upcoming discipline of cognitive neuroscience, or that new researchers coming to GS self-identify with different labels. To identify shifts over time, longitudinal research is needed on researchers’ private and public self-identifications across their careers (to the best of our knowledge, this is currently not possible in GS, as changes in keywords are not available to study). Additionally, the meaning of topics (e.g., emotion) needs to be studied across time, as meanings can change and carry different connotations. Finally, an overall inspection of topics across disciplines suggested that some topics were endorsed by more disciplines than others. Three topics in particular appeared in many top 10 lists: personality, emotion, and health psychology. The relative prominence of personality as an overarching topic is perhaps not surprising given earlier research showing that personality is an integrative topic studied across many disciplines. Of note, however, is that personality was not frequently endorsed by experimentally oriented psychologists, as defined above. As a matter of fact, only emotion also appeared in the top 10 list of the experimental disciplines. This topic therefore seems very promising for multidisciplinary approaches and interdisciplinary integration.
Indeed, experimental researchers could study the effects of emotional states on other psychological processes, social psychologists could study the effects of emotions on social outcomes, clinical psychologists could study negative emotions such as shame and depressed affect, and so forth.

Publication patterns

As part of our third research question, we also compared psychological disciplines in terms of productivity and impact. Regarding productivity, we found that the typical (median) psychological researcher publishes 3–4 papers per year (that are indexed in GS). Some researchers, however, publish much more than this (e.g., 11% of all researchers published 10 papers or more), thus producing a skewed distribution that corresponds to a power law. Productivity differences between disciplines were also found, with cognitive neuroscience and developmental psychology being somewhat less productive than personality psychology (though only when using the narrow keyword selection) and psychometrics. This might reflect the greater necessary investment in sampling in the former disciplines, with fMRI experiments and longitudinal studies being quite time-consuming to set up. By comparison, in more “productive” fields it might be more common to include additional co-authors on papers, resulting in higher numbers of papers per year. We also compared the citation impact of the various disciplines, both in terms of citation increases per year and in terms of citation increases per year and publication unit (e.g., paper). Our results suggested relatively large differences between the disciplines. Results also converged with the impact statistics of the JCR, with one exception: Cognitive neuroscience did not obtain the strong citation impact that would have been predicted based on the average journal impact factors in that domain.
This might not be that surprising, however, because many journals in the Neuroscience domain of Web of Science are rather medical and/or biological journals and thus from fields where impact factors tend to be higher. For cognitive neuroscientists publishing in these journals, however, research impact seems about similar to that of other psychologists, as suggested, for example, by the comparability of impact between the two largest disciplines of social psychology and cognitive neuroscience. Psychoanalysis emerged as the discipline with the lowest impact, and experimental psychology and psychophysiology also appeared somewhat lower in impact. Regarding psychoanalysis, the relatively low impact might reflect the earlier finding by Robins et al. that this school of thought has fallen out of fashion. Consistent with this, the relative prominence of psychoanalysis also declined visibly in our analysis. Moreover, the median first year of publication for psychoanalytic profiles was markedly earlier (1998) than for the other disciplines (range 2005–2007). This was not true for experimental psychology and psychophysiology, however. Because more recent technological advances, such as fMRI and neuroimaging, were less frequently endorsed by these disciplines (when compared to cognitive neuroscientists), it is possible that differences in infrastructure can explain differences in impact, but this remains speculative without further research. Using an expanded set of personality keywords, personality psychology but also cognitive neuroscience and multidisciplinary psychology emerged as particularly high in impact. In the introduction, we speculated that interdisciplinary focus and replicability of findings might contribute to impact. In line with this, both personality psychology and cognitive psychology have been highlighted as being especially robust.
In terms of interdisciplinary focus, the high impact of personality psychology is reminiscent of earlier claims that personality psychology is a hub science that attracts citations from different areas. Consistent with this, we empirically established that multidisciplinary psychology (defined as endorsing multiple disciplines on one’s GS profile) was also associated with a relatively large citation impact. However, speaking against this speculation is the fact that a) the high impact of personality psychology was less evident when an expanded set of keywords was used, b) cognitive neuroscience also demonstrated strong citation impact (particularly when compared to the expanded set of personality keywords) although its members less often endorsed other disciplines, and c) experimental psychology demonstrated weaker citation impact although its members were more likely to endorse other disciplines. In theory, the idea that multidisciplinary research has stronger impact makes sense: If a discipline produces findings that are relevant for many other disciplines, that discipline can accumulate more citations than a more “isolated” discipline. On a substantive level, personality psychology is concerned with various psychological variables within the “whole person” and might therefore be particularly suited to play a multi- and interdisciplinary role. That said, it was striking that the topic of personality was not frequently endorsed by cognitive neuroscientists or experimental psychologists and therefore seems to primarily occupy a hub position within correlational psychology. It is an interesting question whether there might be other, hitherto undiscovered hub positions within experimental disciplines as well (e.g., focusing on whole-brain functioning or on interactions between psychological functions), but these were not discovered by the current analyses.
It might also be the case that cognitive neuroscience itself can qualify as such a hub position within the experimental approach because it emerged as a rather strong marker of that domain in our factor analysis, as opposed to the more fragmented nature of the correlational approach. Moreover, cognitive neuroscience might be multi-disciplinary at a higher-order level, integrating knowledge from biology, medicine, engineering, and mathematics. One interesting, unexpected finding was that the variance in impact of personality researchers in was much higher than the variance of other disciplines. This partly reflected the discipline’s smaller size, because the pattern was not visible when a more expanded set of keywords was used (see S3 Fig in ). However, the same phenomenon did not occur for the discipline of psychoanalysis, which is almost equal in size. Rather, it is possible that in personality psychology there is a relatively larger likelihood of developing an exceptionally well-cited profile, when compared to other disciplines. In other words, whereas many personality psychologists appear to follow relatively average trajectories, a sizable minority deviated from this norm and were cited many times more often. Although speculative, perhaps this pattern is due to a combination of the status of personality psychology as a hub science and its relatively small size. This combination would make it easier for clear “topic leaders” to emerge, who are then cited widely not just within personality psychology but also in other disciplines. Strengths and limitations Our study had several strengths. To the best of our knowledge, we are the first to systematically scan the entire field of psychology without relying on a classification of journals. This is important because not all authors of psychology journals are psychologists, and conversely not all psychology researchers publish in psychology journals. 
Instead, we focused on disciplinary endorsement, which has the advantage of focusing attention on the content areas that psychological researchers themselves identify with. Using the entire scope of self-endorsed topic labels, we could therefore obtain a fuller picture of the different topics that are studied within psychology and also how they are combined. Also, a clear strength of our approach is that we included a relatively large and diverse sample of researcher profiles, which was leveraged by the fact that each profile included multiple data points per year. Using these rich data, we could compute novel impact statistics, such as citation increases per year while controlling for between-discipline differences in productivity. Finally, we employed a novel and potentially more precise index of comparative scientific impact that takes into account differences in researchers’ career stage and quantitative publication output, which differs between psychological disciplines as we have found through our results. That said, our approach also had clear limitations. Most obviously, we were limited to sampling profiles of researchers who a) created a GS profile in the first place, b) used labels to describe their research (this is not required by GS), c) used the labels that we identified as markers of each discipline, d) formulated these labels in English, and e) had more than 100 citations in GS. This clearly produced a somewhat distorted country distribution that was skewed towards Anglo-Saxon countries, although this bias has decreased substantially in recent years. Likewise, researchers were included because they only endorsed labels that were more specific than the disciplines we used (e.g., cognitive behavioral therapy instead of clinical psychology ) or did not use any label at all. The generalizability of our findings is thus limited to the degree that our GS sample is representative of the scholars of the studied fields. 
Within these limits, GS represents a unique possibility to sample thousands of scholars who self-identify as contributing to certain topics and fields–and links relevant information. This could barely be obtained otherwise, although we also note that future AI methods might perhaps automatically classify researchers based on keywords contained in paper abstracts. We were able to verify that at least some of the editorial board members of mainstream personality journals indeed endorsed the corresponding discipline label in GS and also that a substantial percentage of papers of self-identified personality psychologists indeed were publishing in corresponding outlets. However, still a large number of (associate) editors did not show up in our selection of GS profiles. From our experiences with the editorial board members from personality psychology, 43 out of 81 editors who used GS labels could have been identified with a mix of 3 common keywords (“personality psychology”; “personality”; and “individual differences”). This partial success in increasing coverage might count as a “proof of principle”. Moreover, by comparing topic endorsement in terms of a vector correlation, we were able to provide a first estimation of the amount of bias resulting from keyword selection. Our reported Spearman rank-order correlation of r = .49 between two different keywords set suggests that keyword selection did introduce method variance but our decisions were likely still valid to some extent. However, more systematic research is clearly needed that identifies for each discipline whether it is possible to identify a core set of keywords to identify most of their adherents and use this set (instead of a single keyword) for sampling purposes. A second limitation is our selection of psychological disciplines. For example, we relied only on classic distinctions within Web of Science, supplemented with personality psychology because this is also widely seen as a core discipline. 
Another reason for adding personality psychology is that we are most familiar with this (relatively small) discipline, and this knowledge helped us to verify the anchor our analytic procedures and results. However, we encourage future researchers to also include additional disciplines, such as health psychology, forensic psychology, and music psychology. Furthermore, we were forced to ignore differences within disciplines. For example, within social psychology, some researchers are more focused on experimental methods, whereas others use more correlational methods . The relatively static nature of our method also did not allow us to study in detail the processes by which researchers come to identify with certain disciplines, including how scientists co-construct this identification in interaction with stakeholders, like other scientists and society at large (e.g., ). A final limitation is our reliance on GS, which uses relatively liberal search algorithms that might not always produce valid results. For example, the year of first publication was not always computed correctly by GS, often because the author in question had a relatively common name, which sometimes led GS to claim many publications that were not actually written by the author in question (but by someone with the same or a similar name). Recently, Tang et al. checked this issue for a random sample of 3,000 computer science profiles, and found that 90.5% of profiles did not contain a single publication that was falsely assigned, suggesting that the problem is relatively limited in scope (see also ). Still, while GS allows researchers to clean their profiles and exclude such extraneous publications, this is apparently not always done by researchers. Also, in some cases, we needed to rely on the “scholar” package’s estimate of the year of first publication, which was biased (pushed forward in time) in the case of highly established researchers. 
Fortunately, however, we could compute a valid indicator of first publication year by hand and established that this was associated substantially with the GS estimate, so the biasing influence seems to have been limited. Another issue with GS is that it is relatively unclear what processes (e.g., self-presentation strategies, decision rules) give rise to generating identification labels on GS. To make GS even more useful for bibliometric research, it would be helpful if it also adopted a more standardized system of label endorsement (not necessarily instead but in addition to the free format that is currently used). Finally, although the average individual impact across disciplines in GS converged with the journal impact factor in JCR (which is based on Web of Science), our results should be replicated with other bibliometric platforms, such as Web of Science or Scopus. This seems currently difficult to do because other platforms do not include information about researchers’ disciplinary affiliation, though there might be ways around this (e.g., automatically assigning researchers to a discipline if they publish a certain percentage of their papers in journals of any discipline). With increasing sophistication of search algorithms, it might be possible to directly compare results across platforms and discipline classification methods. Our first research question concerned the numerical strength of the various disciplines as well as their co-occurrence within individual researcher profiles. Our analysis produced the striking finding that identifications with the discipline of cognitive neuroscience has seen a dramatic rise over the past 40 years, whereas those with social psychology have seen a strong decline (though the discipline has seen a large increase in absolute number of GS profiles, due to the large rise in profiles overall). 
In part, the relative rise in the proportion of profiles that identify with cognitive neuroscience and the relative decline in the proportion that identify with social psychology appear to be two sides of the same coin: If a greater proportion of researchers start to identify with cognitive neuroscience, a smaller proportion is left for identification with other disciplines (excluding endorsement of multiple disciplines, of course). This clearly did not happen with all disciplines, however. For example, the proportion of researchers identifying with mathematical psychology in their GS profiles clearly increased between 1980 and 1995. A follow-up analysis indicated that the increasing proportion of profiles from countries without an Anglo-Saxon background explained the decreasing proportion of profiles identifying with social psychology. Specifically, after including a variable indicating the percentage of Anglo-Saxon profiles in each year in the regression analysis, the negative association between calendar year and the proportion of social psychological profiles was no longer significant and constituted a relatively small effect size (partial R² = 0.03). The rise of identifications with cognitive neuroscience was not necessarily surprising, as Robins et al. had already reported evidence for an increasing prominence of both cognitive psychology and, to a certain extent, also neuropsychology. However, Robins and colleagues did not identify social psychology as a separate discipline, instead focusing on the school of behavioral psychology, which differs in focus from social psychology. The decline of identifications with social psychology has not appeared in previous studies and was surprising. As indicated by the additional analysis mentioned above, this decline seems mostly related to the fact that social psychology is becoming less representative of global psychology.
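The control-variable logic of this follow-up analysis can be illustrated with a small partial-correlation sketch: residualize both the outcome (the yearly proportion of social-psychology profiles) and the predictor (calendar year) on the control (the yearly share of Anglo-Saxon profiles), then correlate the residuals; the squared result corresponds to a partial R². All numbers below are synthetic and purely illustrative, not the study's actual series.

```python
import math
import random

def residuals(x, z):
    """Residuals of a simple OLS regression of x on z."""
    n = len(x)
    mx, mz = sum(x) / n, sum(z) / n
    slope = (sum((zi - mz) * (xi - mx) for xi, zi in zip(x, z))
             / sum((zi - mz) ** 2 for zi in z))
    return [xi - (mx + slope * (zi - mz)) for xi, zi in zip(x, z)]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# Synthetic yearly series: the Anglo-Saxon share declines over time, and
# the social-psychology share depends only on that share (plus noise).
rng = random.Random(0)
year = list(range(20))
anglo = [0.9 - 0.02 * t + 0.03 * math.sin(t) for t in year]
social = [0.5 * a + rng.uniform(-0.01, 0.01) for a in anglo]

raw_r = pearson(year, social)                  # strongly negative raw trend
partial_r = pearson(residuals(social, anglo),  # shrinks once the control
                    residuals(year, anglo))    # is partialled out
```

In this toy series the raw year trend is strongly negative but shrinks markedly once the control variable is partialled out, mirroring the pattern reported above.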
With more and more psychological researchers joining the global research community, this might produce a different composition of psychological science in the not-so-distant future. We also investigated differences in multidisciplinary focus across disciplines. From our scan, it emerged that only 5% of profiles endorsed more than one discipline. There were large differences between disciplines, however. Particularly the very large disciplines of cognitive neuroscience and social psychology were less frequently endorsed in combination with other disciplines. This might be seen as logical if there is more topical breadth with such large disciplines and thus perhaps less need to affiliate with other disciplines. That said, the relatively small discipline of psychoanalysis was also not characterized by many cross-disciplinary profiles, so other factors (like the degree of disciplinary identification, segregation of topics and methods) seem to also play a role here. The large degree of multidisciplinary identification in personality psychology (although less striking when using a more expansive keyword set) is consistent with the conclusion by Yang and Chiu that personality is a hub discipline with connections to and from many other disciplines (see also ). However, institutional factors might also have played a role. Consistent with journals such as the Journal of Personality and Social Psychology as well as combined societies such as the Society for Personality and Social Psychology , almost three quarters of multidisciplinary personality profiles also endorsed social psychology. The remaining multidisciplinary profiles, however, endorsed combinations with all other disciplines. 
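Co-endorsement statistics of this kind are straightforward to compute once each profile is reduced to a set of endorsed discipline labels. The sketch below uses invented toy profiles (not the study's data) to show both the share of multidisciplinary profiles and the asymmetry of co-endorsement between two disciplines:

```python
# Toy profiles: each hypothetical researcher maps to the set of
# discipline labels endorsed on their profile (invented data).
profiles = {
    "r1": {"personality psychology", "social psychology"},
    "r2": {"personality psychology", "social psychology"},
    "r3": {"personality psychology", "clinical psychology"},
    "r4": {"social psychology", "cognitive neuroscience"},
    "r5": {"social psychology", "developmental psychology"},
    "r6": {"social psychology", "clinical psychology"},
    "r7": {"social psychology"},
    "r8": {"cognitive neuroscience"},
}

def share_multidisciplinary(profiles):
    """Fraction of profiles endorsing more than one discipline."""
    return sum(len(p) > 1 for p in profiles.values()) / len(profiles)

def co_endorsement(profiles, a, b):
    """Among multidisciplinary profiles endorsing a, the share also endorsing b."""
    with_a = [p for p in profiles.values() if a in p and len(p) > 1]
    return sum(b in p for p in with_a) / len(with_a) if with_a else 0.0

p_to_s = co_endorsement(profiles, "personality psychology", "social psychology")
s_to_p = co_endorsement(profiles, "social psychology", "personality psychology")
# The relation is asymmetric: p_to_s is high while s_to_p is much lower,
# the same qualitative pattern as reported for personality vs. social psychology.
```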
Finally, it should be noted that while almost 70% of all multidisciplinary personality psychologists endorsed social psychology, the reverse was not true: Fewer than 3% of multidisciplinary social psychologists endorsed personality psychology, which is consistent with a recent analysis that identified personality psychology as only marginally related to social psychology. As stated, only a relative minority of profiles endorsed multiple disciplines. This might be partly a result of a binary identification tendency: the idea that it is useful to firmly identify with only one discipline. This may be enforced by institutional structures, academic positions, job postings, and tenure committees that often seek (highly) specialized scholars who can represent a given field. Needless to say, however, binary classifications also have a practical component, given the way that psychology and academic practices are organized. After all, attending conferences and creating collaboration networks within a discipline takes time, and these investments are much more difficult to repeat across multiple disciplines. All of this notwithstanding, we found that the percentage of multidisciplinary psychologists doubled across the study period, from ca. 2.5% to 5.0%. We can only speculate about the origins of this trend but think that three factors might play a role. First, the increase in multidisciplinarity might have intrinsic reasons. For example, researchers might be driven to study “psychology as a whole” because they really want to understand the wholeness of human functioning and perhaps realize over time that this is not possible within the confines of only one discipline. Second, there might be more recent institutional pressures towards multidisciplinarity, for example, in the tendency of large funding agencies to favor multidisciplinary work.
Third, the research community itself seems to yearn more and more, especially in the wake of the replicability crisis or credibility revolution, for increased cross-talk, sharing of data, and cooperation, resulting in multidisciplinary consortia and coordinated laboratories or studies. Our second research question pertained to the topics that researchers endorsed on their profiles, in addition to the disciplines they identified with. An inspection of these topics indicated that the percentage of endorsement is likely an underestimation of actual research practices within a discipline. For example, somewhat less than 7% of clinical psychologists endorsed psychotherapy, and fewer than 7% of cognitive neuroscientists endorsed neuroimaging, even though these topics appear central to the disciplines in question. This likely reflects the degrees of freedom when creating a GS profile and the fact that the choice of certain labels might limit the perceived necessity to add additional terms (e.g., developmental psychologists apparently did not deem it necessary to include terms such as development or change). As scholars can only publish 5 keywords, they need to take care to select keywords that are important to them, allow their easy identification (for themselves and others), and are not too redundant. Certain terms or concepts that overlap strongly with a discipline and are implicitly contained in the discipline denomination are then likely omitted in most cases. There were large differences between disciplines in the relative frequencies of endorsing certain labels. Overall, only psychoanalysis, psychophysiology, and psychometrics featured topics that were endorsed by more than 10% of profiles (psychotherapy, emotion, and statistics, respectively), whereas the relative endorsement in other disciplines was more diluted (e.g., fewer than 5% of social psychologists endorsed social cognition).
Overall, it seemed that the larger disciplines (clinical psychology, cognitive neuroscience, and social psychology) had a somewhat stronger dilution of topics than many smaller disciplines, perhaps reflecting greater critical mass for further sub-discipline specialization. Surprisingly, developmental psychology in particular seemed rather fragmented in terms of topic endorsement. Although speculative, this seems to reflect the combinatory power of the notion of development/change: Almost every psychological phenomenon changes with age/time, so a developmental psychologist can study an almost limitless array of topics. By comparison, other disciplines might be more constrained in their endorsements to certain key contents (e.g., a clinical psychologist might be more likely to study psychopathology). By extracting all common topics across disciplines and counting relative endorsements of these topics per discipline, we also created “content vectors” for each discipline. By factor-analyzing these vectors, we established a novel method of mapping psychological research. Speaking to the face validity of our new method, our factor solution was reminiscent of the correlational versus experimental distinction that Cronbach identified long ago and that was recently confirmed by Flis and van Eck using graphical mapping based on co-occurrence of terms in article abstracts. Based on topic endorsement frequencies, we indeed found a dimension dominated by cognitive neuroscience and experimental psychology, versus a second dimension dominated by clinical psychology, developmental psychology, personality psychology, psychophysiology, psychometrics, and social psychology. Of note, these dimensions differed from the results of Yang and Chiu, who found two dimensions: basic versus applied and population-specific versus population-general.
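The first step of this mapping, reducing each discipline to a "content vector" of relative topic-endorsement frequencies and then comparing those vectors, can be sketched as follows. The frequencies below are invented for illustration, and simple cosine similarity stands in for the factor analysis actually used in the study:

```python
# Invented relative endorsement frequencies (rows: disciplines,
# columns: shared topics) -- illustrative only, not the study's data.
topics = ["emotion", "memory", "attention", "psychotherapy", "personality"]
content = {
    "cognitive neuroscience":  [0.08, 0.12, 0.10, 0.00, 0.01],
    "experimental psychology": [0.07, 0.10, 0.09, 0.01, 0.02],
    "clinical psychology":     [0.06, 0.01, 0.01, 0.09, 0.05],
}

def cosine(x, y):
    """Cosine similarity between two content vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = sum(a * a for a in x) ** 0.5
    ny = sum(b * b for b in y) ** 0.5
    return dot / (nx * ny)

# The two experimentally oriented disciplines have very similar vectors,
# while the clinical vector stands apart -- the kind of structure a factor
# analysis of such vectors would separate into distinct dimensions.
sim_exp = cosine(content["cognitive neuroscience"], content["experimental psychology"])
sim_mix = cosine(content["cognitive neuroscience"], content["clinical psychology"])
```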
It seemed that social psychology and psychophysiology, which used to have strong experimental traditions, are currently focusing on topics that are also studied by traditionally “correlational” disciplines, like developmental psychology. Although speculative, it might be that more experimentally minded researchers within social psychology and psychophysiology have been increasingly gravitating towards and identifying with the upcoming discipline of cognitive neuroscience, or that new researchers coming to GS self-identify with different labels. To identify shifts over time, longitudinal research is needed on researchers’ private and public self-identifications across their careers (to the best of our knowledge, this is currently not possible in GS as changes in keywords are not available to study). Additionally, the meaning of topics (e.g., emotion) needs to be studied across time, as meanings can change and carry different connotations. Finally, an overall inspection of topics across disciplines suggested that some topics were endorsed by more disciplines than others. Three topics in particular appeared in many top 10 lists: personality, emotion, and health psychology. The relative prominence of personality as an overarching topic is perhaps not surprising based on earlier research showing that personality is an integrative topic studied across many disciplines. Of note, however, is that personality was not frequently endorsed by experimentally oriented psychologists, as defined above. In fact, only emotion also appeared in the top 10 list of the experimental disciplines. This topic therefore seems very promising for multidisciplinary approaches and interdisciplinary integration.
Indeed, experimental researchers could study the effects of emotional states on other psychological processes, social psychologists could study the effects of emotions on social outcomes, clinical psychologists could study negative emotions such as shame and depressed affect, and so forth. As part of our third research question, we also compared psychological disciplines in terms of productivity and impact. Regarding productivity, we found that the typical (median) psychological researcher publishes 3–4 papers per year (which are indexed in GS). Some researchers, however, publish much more than this (e.g., 11% of all researchers published 10 papers or more), thus producing a skewed distribution that corresponds to a power law. Productivity differences between disciplines were also found, with cognitive neuroscience and developmental psychology being somewhat less productive than personality psychology (only using a narrow keyword selection) and psychometrics. This might reflect the greater necessary investment in sampling in the former disciplines, with fMRI experiments and longitudinal studies being quite time-consuming to set up. By comparison, in more “productive” fields it might be more common to include additional co-authors on papers, resulting in higher numbers of papers per year. We also compared the citation impact of the various disciplines both in terms of citation increases per year as well as in citation increases per year and publication unit (e.g., paper). Our results suggested relatively large differences between the disciplines. Results also converged with the impact statistics of the JCR, with one exception: Cognitive neuroscience did not obtain the strong citation impact that would have been predicted based on the average journal impact factors in that domain. 
This might not be that surprising, however, because many journals in the Neuroscience domain of Web of Science are primarily medical and/or biological journals and thus come from fields where impact factors tend to be higher. For cognitive neuroscientists publishing in these journals, however, research impact seems roughly similar to that of other psychologists, as suggested, for example, by the comparability of impact between the two largest disciplines, social psychology and cognitive neuroscience. Psychoanalysis emerged as the discipline with the lowest impact, and experimental psychology and psychophysiology also appeared somewhat lower in impact. Regarding psychoanalysis, the relatively low impact might reflect the earlier finding by Robins et al. that this school of thought has fallen out of fashion. Consistent with this, the relative prominence of psychoanalysis also visibly declined in our analysis. Moreover, the median first year of publication for psychoanalytic profiles was markedly lower (1998) than for the other disciplines (range 2005–2007). This was not true for experimental psychology and psychophysiology, however. Because more recent technological advances, such as fMRI and neuroimaging, were less frequently endorsed by these disciplines (when compared to cognitive neuroscientists), it is possible that differences in infrastructure can explain differences in impact, but this remains speculative without further research. Using an expanded set of personality keywords, personality psychology but also cognitive neuroscience and multidisciplinary psychology emerged as particularly high in impact. In the introduction, we speculated that interdisciplinary focus and replicability of findings might contribute to impact. In line with this, both personality psychology and cognitive psychology have been highlighted as being especially robust.
In terms of interdisciplinary focus, the high impact of personality psychology is reminiscent of earlier claims that personality psychology is a hub science that attracts citations from different areas. Consistent with this, we empirically established that multidisciplinary psychology (defined as endorsing multiple disciplines on one’s GS profile) was also associated with a relatively large citation impact. However, speaking against this speculation is the fact that a) the high impact of personality psychology was less evident when an expanded set of keywords was used, b) cognitive neuroscience also demonstrated strong citation impact (particularly when compared to the expanded set of personality keywords) but its members less often endorsed other disciplines, and c) experimental psychology demonstrated weaker citation impact although its members were more likely to endorse other disciplines. In theory, the idea that multi-disciplinary research has stronger impact makes sense: If a discipline produces findings that are relevant for many other disciplines, that discipline can accumulate more citations than a more “isolated” discipline. On a substantive level, personality psychology is concerned with various psychological variables within the “whole person” and might therefore be particularly suited to play a multi- and inter-disciplinary role. That said, it was striking that the topic of personality was not frequently endorsed by cognitive neuroscientists or experimental psychologists and therefore seems to primarily occupy a hub-position within correlational psychology. It is an interesting question whether there might be other, hitherto undiscovered hub-positions within experimental disciplines as well (e.g., focusing on whole-brain functioning or on interactions between psychological functions), but none emerged from the current analyses.
It might also be the case that cognitive neuroscience itself can qualify as such a hub position within the experimental approach because it emerged as a rather strong marker of that domain in our factor analysis, as opposed to the more fragmented nature of the correlational approach. Moreover, cognitive neuroscience might be multi-disciplinary at a higher-order level, integrating knowledge from biology, medicine, engineering, and mathematics. One interesting, unexpected finding was that the variance in impact of personality researchers was much higher than the variance of other disciplines. This partly reflected the discipline’s smaller size, because the pattern was not visible when a more expanded set of keywords was used (see S3 Fig). However, the same phenomenon did not occur for the discipline of psychoanalysis, which is almost equal in size. Rather, it is possible that in personality psychology there is a relatively larger likelihood of developing an exceptionally well-cited profile, when compared to other disciplines. In other words, whereas many personality psychologists appear to follow relatively average trajectories, a sizable minority deviated from this norm and were cited many times more often. Although speculative, perhaps this pattern is due to a combination of the status of personality psychology as a hub science and its relatively small size. This combination would make it easier for clear “topic leaders” to emerge, who are then cited widely not just within personality psychology but also in other disciplines.

Strengths and limitations

Our study had several strengths. To the best of our knowledge, we are the first to systematically scan the entire field of psychology without relying on a classification of journals. This is important because not all authors of psychology journals are psychologists, and conversely not all psychology researchers publish in psychology journals.
Instead, we focused on disciplinary endorsement, which has the advantage of focusing attention on the content areas that psychological researchers themselves identify with. Using the entire scope of self-endorsed topic labels, we could therefore obtain a fuller picture of the different topics that are studied within psychology and also how they are combined. Also, a clear strength of our approach is that we included a relatively large and diverse sample of researcher profiles, which was complemented by the fact that each profile included multiple data points per year. Using these rich data, we could compute novel impact statistics, such as citation increases per year while controlling for between-discipline differences in productivity. Finally, we employed a novel and potentially more precise index of comparative scientific impact that takes into account differences in researchers’ career stage and quantitative publication output, which, as our results show, differs between psychological disciplines. That said, our approach also had clear limitations. Most obviously, we were limited to sampling profiles of researchers who a) created a GS profile in the first place, b) used labels to describe their research (this is not required by GS), c) used the labels that we identified as markers of each discipline, d) formulated these labels in English, and e) had more than 100 citations in GS. This clearly produced a somewhat distorted country distribution that was skewed towards Anglo-Saxon countries, although this bias has decreased substantially in recent years. Likewise, researchers were not included if they only endorsed labels that were more specific than the disciplines we used (e.g., cognitive behavioral therapy instead of clinical psychology) or did not use any label at all. The generalizability of our findings is thus limited to the degree that our GS sample is representative of the scholars of the studied fields.
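The two impact statistics mentioned above (citation increase per year, and citation increase per year and publication unit) can be sketched as follows; the profile numbers are invented, and the study's actual index additionally adjusts for career stage in a more elaborate way:

```python
from dataclasses import dataclass

@dataclass
class Profile:
    citations_start: int  # total citations at the start of the window
    citations_end: int    # total citations at the end of the window
    years: int            # length of the observation window in years
    papers: int           # publications indexed within the window

def impact_per_year(p: Profile) -> float:
    """Citation increase per year, ignoring output differences."""
    return (p.citations_end - p.citations_start) / p.years

def impact_per_year_per_paper(p: Profile) -> float:
    """Citation increase per year and per publication unit."""
    return impact_per_year(p) / p.papers

# A prolific researcher can look more impactful per year, but less so
# once productivity differences are taken into account.
a = Profile(citations_start=100, citations_end=700, years=5, papers=40)
b = Profile(citations_start=100, citations_end=500, years=5, papers=10)
```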
Within these limits, GS represents a unique possibility to sample thousands of scholars who self-identify as contributing to certain topics and fields, and links relevant information. Such a sample could hardly be obtained otherwise, although we also note that future AI methods might automatically classify researchers based on keywords contained in paper abstracts. We were able to verify that at least some of the editorial board members of mainstream personality journals indeed endorsed the corresponding discipline label in GS, and also that a substantial percentage of the papers of self-identified personality psychologists were indeed published in corresponding outlets. However, a large number of (associate) editors still did not show up in our selection of GS profiles. From our experiences with the editorial board members from personality psychology, 43 out of 81 editors who used GS labels could have been identified with a mix of three common keywords (“personality psychology”, “personality”, and “individual differences”). This partial success in increasing coverage might count as a “proof of principle”. Moreover, by comparing topic endorsement in terms of a vector correlation, we were able to provide a first estimate of the amount of bias resulting from keyword selection. Our reported Spearman rank-order correlation of r = .49 between the two different keyword sets suggests that keyword selection did introduce method variance, but our decisions were likely still valid to some extent. However, more systematic research is clearly needed to determine, for each discipline, whether a core set of keywords can be identified that captures most of its adherents, and to use this set (instead of a single keyword) for sampling purposes. A second limitation is our selection of psychological disciplines. For example, we relied only on classic distinctions within Web of Science, supplemented with personality psychology because this is also widely seen as a core discipline.
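The vector correlation used for this check, a Spearman rank-order correlation between the topic-endorsement frequencies obtained under two keyword sets, can be sketched as follows with invented frequencies (the study's observed value was r = .49):

```python
def ranks(values):
    """1-based ranks, with ties sharing the average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank-transformed vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Invented endorsement frequencies for the same topics under a narrow
# versus an expanded keyword set.
narrow = [0.12, 0.08, 0.05, 0.04, 0.02, 0.01]
expanded = [0.10, 0.03, 0.06, 0.05, 0.04, 0.02]
rho = spearman(narrow, expanded)
```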
Another reason for adding personality psychology is that we are most familiar with this (relatively small) discipline, and this knowledge helped us to verify and anchor our analytic procedures and results. However, we encourage future researchers to also include additional disciplines, such as health psychology, forensic psychology, and music psychology. Furthermore, we were forced to ignore differences within disciplines. For example, within social psychology, some researchers are more focused on experimental methods, whereas others use more correlational methods . The relatively static nature of our method also did not allow us to study in detail the processes by which researchers come to identify with certain disciplines, including how scientists co-construct this identification in interaction with stakeholders, such as other scientists and society at large (e.g., ). A final limitation is our reliance on GS, which uses relatively liberal search algorithms that might not always produce valid results. For example, the year of first publication was not always computed correctly by GS, often because the author in question had a relatively common name, which sometimes led GS to attribute many publications that were not actually written by the author in question (but by someone with the same or a similar name). Recently, Tang et al. checked this issue for a random sample of 3,000 computer science profiles and found that 90.5% of profiles did not contain a single falsely assigned publication, suggesting that the problem is relatively limited in scope (see also ). Still, while GS allows researchers to clean their profiles and exclude such extraneous publications, researchers apparently do not always do so. Also, in some cases, we needed to rely on the "scholar" package's estimate of the year of first publication, which was biased (pushed forward in time) in the case of highly established researchers.
Fortunately, however, we could compute a valid indicator of first publication year by hand and established that it was substantially associated with the GS estimate, so the biasing influence seems to have been limited. Another issue with GS is that it is relatively unclear what processes (e.g., self-presentation strategies, decision rules) give rise to the identification labels generated on GS. To make GS even more useful for bibliometric research, it would be helpful if it also adopted a more standardized system of label endorsement (not necessarily instead of, but in addition to, the free format that is currently used). Finally, although the average individual impact across disciplines in GS converged with the journal impact factor in JCR (which is based on Web of Science), our results should be replicated with other bibliometric platforms, such as Web of Science or Scopus. This currently seems difficult to do because other platforms do not include information about researchers' disciplinary affiliation, though there might be ways around this (e.g., automatically assigning researchers to a discipline if they publish a certain percentage of their papers in journals of that discipline). With the increasing sophistication of search algorithms, it might become possible to directly compare results across platforms and discipline classification methods. The current study conducted a comprehensive scan of psychological disciplines as represented in GS profiles. Our results indicated that cognitive neuroscience and social psychology have the largest number of self-identified GS profiles, but the relative composition of these profiles has shifted quite substantially from social psychology towards cognitive neuroscience, possibly because of the latter's more prominent role outside of the Anglo-Saxon countries that used to dominate GS profiles to a much larger degree.
Multidisciplinary researchers appeared to be a tiny, albeit increasing, minority, except in personality psychology, where additionally endorsing other disciplines seemed the norm. In terms of topical coverage, scientific psychology appeared focused on a variety of research themes that vary quite substantially across disciplines. Consistent with earlier conceptual and empirical analyses, the broad dimensions of correlational and experimental psychology were found to underlie the pattern of topical endorsement across the various studied disciplines. Of all possible topics, emotion might currently be seen as a potential integrating force within psychology, as it featured prominently in the thematic lists of almost all disciplines as well as in the profiles of researchers with a multidisciplinary focus. It might be very much worthwhile to pursue such interdisciplinary integration, as suggested by the example of personality psychology. Specifically, personality psychology seems to represent a discipline that integrates many perspectives from other disciplines and is therefore useful for many other applied and fundamental disciplines. Institutions that want to further such integration (as well as scientific impact) might therefore be advised to focus on topics such as emotion and personality. Such an approach might also help to stem the fragmentation of academic psychology, although progress towards unification seems also contingent on a more evenly distributed focus on topics across the correlational and experimental psychological traditions.
Roles considered important for hospitalist and non-hospitalist generalist practice in Japan: a survey study | 064e9547-2e5d-4478-bcb1-1f39f3b2c251 | 10327327 | Internal Medicine[mh] | In the mid-1990s, an increased focus on quality and patient safety led to the evolution of the hospitalist specialty . Since its inception, the number of hospitalists has grown, with current estimates of at least 44,000 non-paediatric hospitalists in the United States of America (USA) . An interesting aspect of the rapid expansion of hospital medicine is the growth of the field beyond USA, including Asian Countries . While some of the reasons for the development of hospital medicine internationally are the same as in the United States, there are differences in training, healthcare systems, regulations, and cultural norms, underscoring the practice of hospital medicine in different countries. In Japan, the number of hospitalists is increasing; however, their role and importance is unclear as of 2022. This survey study investigated what hospitalists and non-hospitalist generalists in Japan consider important for the practice of their specialty. In Japan, post-graduate year (PGY)1 − 2 residents spend the first two years rotating through various specialties such as internal medicine, general medicine (similar to family practice in the United States), paediatrics, obstetrics and gynaecology, and emergency medicine after graduation from medical school . In PGY 3 − 5, residents pick an area of specialisation . In 2018, post-graduate training in Japan was modified to incorporate hospital medicine . There are now two pathways through which Japanese doctors can become hospitalists. PGY 3 − 5 residents specialising in either general internal medicine or general medicine can go on to become hospitalists. Specialists in general internal medicine, general medicine, and hospital medicine are all considered generalists (Appendix ). 
Until recently, the definition of general medicine in Japan has been ambiguous, and there has been no clear distinction between the role of a hospitalist and a non-hospitalist generalist . The historical lack of role clarity has influenced the practice of hospitalists in Japan, who, unlike many hospitalists in the U.S., do not work exclusively in hospitals. Instead, Japanese hospitalists take care of patients in multiple locations: hospitals (over 20 beds), outpatient clinics, emergency rooms, and ICUs. Non-hospitalist generalists are often described as family physicians. They often work in clinics with no beds or fewer than 20 beds. They specialise in treating a wide range of patients and problems from a broad perspective, not limited to a specific disease or age group. While emphasising family relationships, they must be able to comprehensively address health issues commonly encountered in the community regardless of age or disease, including preventive care, multimorbidity, and psychosocial issues . Most hospitalist and non-hospitalist generalists in Japan belong to one (or both) of two societies: the Japan Primary Care Association (JPCA) and the Japanese Society of Hospital General Medicine (JSHGM). In 2022, the JPCA became a professional organisation certifying family physicians, while JSHGM is the entity certifying hospitalists . Membership in an academic society is a prerequisite for both specialties. As a professional and certification organisation for hospitalists, JSHGM has described the 10 most important roles for the Japanese hospitalist . These include a generalist mindset, leadership, management, comprehensive community care, collaboration with multiple professions, medical interviews, physical examinations, diagnostic reasoning, and active educational and academic activities . These roles are akin to the 24 "core competencies" for hospitalists in the U.S. defined by the Society of Hospital Medicine (SHM) .
Although these priorities have been defined by professional organisations in Japan and the U.S., it is unclear what roles hospitalists themselves consider important in their practice. It is also unknown whether the clinical priorities of hospitalists differ from those of non-hospitalist generalists. Clarity around practice is important in delineating the professional differences between hospitalists and non-hospitalist generalists and will facilitate the continued training and development of hospitalists in Japan. Therefore, we performed this survey study to investigate what hospitalists and non-hospitalist generalists in Japan consider important for the practice of their specialty.
Setting and participants This study was an observational survey based on questionnaires sent to all hospitalists and non-hospitalist generalists listed in the JPCA and JSHGM mailing lists. For this study, hospitalists were defined as general medicine physicians working in hospital wards or in a clinical practice affiliated with a medium-to-large hospital. The response period was from January 28 to March 28, 2020. Residents were excluded from the study . We used Research Electronic Data Capture to store data online . Survey instrument Subject matter experts (TM, SK, YK, KN, HN, and GD) met in a series of rapid-cycle sessions to generate and revise questions for the survey. This content expertise was based on: (1) approximately 10 years of hospitalist practice in Japan, (2) experience working as a hospitalist in the United States (GD), and (3) involvement in creating competencies for hospitalists in Japan (TS and TN). The content experts first determined the possible roles of Japanese hospitalists based on the 10 items considered important by the JSHGM and the 24 items included in the core competencies for hospital medicine in health care systems for hospitalists in the U.S. as described by SHM . Twenty-six items were ultimately included in the survey. Respondents were asked to select the three items they thought were the most important roles for hospitalist practice, ranking them first to third. Responses were scored by applying 3 points to the first-ranked item, 2 points to the second, 1 point to the third, and 0 points for "not applicable." This scoring method was based on the Borda count . The scores for each item were then divided by the overall total score and expressed as a percentage.
The survey also included demographic questions (age and sex), post-graduate years at the time of the baseline survey, academic society memberships (JPCA and JSHGM), whether they belonged to a general medicine department in a school of medicine, practice setting (university hospital, community-based hospital, clinic), and hospital size. Institutions with 19 or fewer beds were referred to as clinics, and institutions with 20 or more beds were referred to as hospitals, according to the standards of the Ministry of Health, Labour and Welfare . A sub-analysis comparing the scores was performed for community-based and university hospitals. For this study, hospitals with 20 − 199 beds, 200 − 399 beds, and > 400 beds were categorised as small, medium-sized, and large hospitals, respectively. Respondents were categorised as residents, attendings, managers, and "other" (Table ). Doctors attending a three-year specialty program (PGY3 − 5) were called residents; staff members were called attendings; managers were called managers; and physicians who did not fit into any preceding category were called others. The survey was conducted in Japanese, and the English version of the questionnaire is published as an appendix (Appendix ). Data analysis Results are presented as medians (interquartile range) for continuous variables or prevalence for categorical variables. All calculations were performed using JMP PRO software, version 13.0 (SAS Institute, Cary, NC, USA). Variables were subjected to the chi-square test, and p values < 0.05 were considered statistically significant.
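The ranking-based scoring rule described above can be sketched in a few lines; the role names and responses below are illustrative placeholders, not the actual survey data.

```python
from collections import Counter

# Borda-style scoring: rank 1 -> 3 points, rank 2 -> 2, rank 3 -> 1,
# unranked items score 0. Scores are then normalized to percentages.
POINTS = {1: 3, 2: 2, 3: 1}

# Illustrative responses: each maps rank -> chosen role.
responses = [
    {1: "evidence-based medicine", 2: "diagnostic reasoning", 3: "inpatient medical management"},
    {1: "evidence-based medicine", 2: "inpatient medical management", 3: "elderly care"},
    {1: "diagnostic reasoning", 2: "evidence-based medicine", 3: "inpatient medical management"},
]

scores = Counter()
for response in responses:
    for rank, role in response.items():
        scores[role] += POINTS[rank]

total = sum(scores.values())
percentages = {role: 100 * s / total for role, s in scores.items()}
```

Dividing each item's score by the overall total, as the paper does, makes the percentages comparable across groups with different numbers of respondents.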
The number of physicians with known affiliations on these two mailing lists was 3,466 for JPCA and 1,766 for JSHGM. Several participants were members of both associations but only responded to the questionnaire once. There were 1,367 unique respondents, and the overall response rate was 26.1%. Two hundred eighty-eight incomplete surveys were excluded from the analysis. Nine respondents did not meet the inclusion criteria (eight were residents, and one was not a physician). A total of 733 hospitalists and 238 non-hospitalist generalists were included in the study (Fig. ). In the hospitalist group, 85.2% were men, and the median age was 46 years. Further, 41.2% belonged to both JSHGM and JPCA; 72.6% of the respondents worked in community-based hospitals, 45.1% worked in large hospitals, and 53.5% were attendings. There were no significant differences between the hospitalist and non-hospitalist groups regarding age and post-graduate year of practice (Table ). Both hospitalists and non-hospitalists most frequently ranked evidence-based medicine among the three most important roles. However, hospitalists ranked diagnostic reasoning and inpatient medical management as the second and third most important roles, while non-hospitalists ranked inpatient medical management and elderly care second and third. While there were similarities in what the generalists ranked as important, the roles least often identified as among the most important differed between the two groups (Fig. ) (Appendix ). Interestingly, in a comparison of community-based and university hospitals, hospitalists at community-based hospitals placed the highest importance on inpatient medical management (13.2% [6.2% in the university hospital group]), while hospitalists at university hospitals placed the highest importance on diagnostic reasoning (15.2% [10.3% in the community-based hospital group]) (Appendix ).
In the comparison by gender, males accorded the most importance to evidence-based medicine (13.0% [11.5% in the female group]), while females gave the most importance to inpatient medical management (14.8% [10.7% in the male group]) (Appendix ). By position, residents and attendings gave the highest importance to evidence-based medicine (residents 17.1%, attendings 13.2%), while managers accorded the highest importance to diagnostic reasoning (11.5%) (Appendix ). In the comparison by academic society, those belonging only to JPCA regarded inpatient medical management as the most important (14.1%), while those belonging only to JSHGM regarded evidence-based medicine as the most important (15.5%). Those who belonged to both societies considered diagnostic reasoning the most important (13.0%) (Appendix ).
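The reported overall response rate can be checked arithmetically. A small sketch, assuming (as the paper's limitations note suggests) that the denominator is the combined, non-deduplicated count of the two mailing lists:

```python
# Mailing-list sizes and unique respondents as reported in the results.
jpca_members = 3466
jshgm_members = 1766
unique_respondents = 1367

# The two lists overlap, so this denominator double-counts dual members;
# this is why the paper regards 26.1% as an underestimate of the true rate.
response_rate = 100 * unique_respondents / (jpca_members + jshgm_members)
```

With these figures, 1,367 / 5,232 comes out to roughly 26.1%, matching the reported rate.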
This study is the first to investigate what roles Japanese hospitalists consider important for their practice. Japanese hospitalists and non-hospitalist generalists ranked evidence-based medicine, diagnostic reasoning, and inpatient medical management as the most important roles for their practice. The relative importance of these three roles for both hospitalists and non-hospitalist generalists is congruent with clinical practice in Japan. In addition, in a study of Japanese general medicine doctors, clinical care was the most important among the four categories of clinical care, education, research, and management . This suggests that both hospitalists and non-hospitalist generalists may emphasise clinical care. A comparison between community-based and university hospitals among hospitalists showed that hospitalists at community-based hospitals placed more importance on inpatient medical management, whereas those at university hospitals placed more importance on diagnostic reasoning. This may reflect the large number of hospitalists in this study who belonged to community-based hospitals. It may also be influenced by previous reports that university hospitals tend to receive consults for difficult-to-diagnose cases . Moreover, there is a long-standing history of training in diagnostic thinking in Japan, accomplished through collective intelligence and non-profit conferences on diagnostic reasoning frequently held in Japan . In 2019, JSHGM created a working group on diagnostic excellence to advance the field of diagnostic medicine in Japan, and this group has been very active to date . Within JSHGM conferences, the group reports the largest number of cases and conducts workshops on diagnostic medicine. Outside of the JSHGM, the group has produced the most research in Japan in this field. Hospitalists prioritise the quality and safety of medical care more than non-hospitalist generalists do.
Studies have shown that the presence of hospitalists in Japan has shortened hospital stays and reduced costs for common diseases such as pneumonia and heart failure . Similarly, a report summarising the role of hospitalists in four Asian countries also emphasises the importance of their role in addressing healthcare quality . The results of these previous studies indicate that medical care quality and safety are a core interest for Japanese hospitalists and that their work improves patient care in this area. Our study also shows that non-hospitalist generalists in Japan place more importance on elderly care than hospitalists do. When the frequencies of inpatient medical management and elderly care are added together, there may not be a large difference; still, the care of the elderly is important to hospitalists. Japan is an aging society, with 25% of the population over 65 years old as of 2013, and this figure is expected to reach 40% by 2060 . Elderly people are more likely to have multimorbidity, and 62.8% of those aged 65 years and older present with multiple co-morbidities . Korean hospitalists reportedly contribute to shorter hospital stays for this patient population . In addition, a previous report recommends that hospitalists in Europe, where the proportion of elderly patients is smaller than in Japan, need to pay more attention to diseases related to the elderly . We believe that the care of the elderly is a crucial area of development for hospitalists in the future. To distinguish between hospitalists and non-hospitalist generalists, JSHGM believes that the original criteria could be refined to better define the role of the hospitalist . This study has several limitations. First, the response rate was low at 26.1%. However, our survey was sent to all members of JPCA and JSHGM, some of whom belong to both societies; because these dual members were counted twice in the denominator, the true response rate is likely underestimated.
In addition, a large number of respondents (288) dropped out of the survey, of which 281 were nearly non-responsive. This may be because the survey was an online survey that could be answered from a cell phone; many respondents may therefore have been interrupted by other tasks or may have opened the survey once intending to answer later. Moreover, general medicine doctors who respond to a research-based survey like this may have a particularly high affinity for research. However, this study did not examine the respondents' backgrounds; thus, further research is needed. Second, we only surveyed Japanese hospitalists and non-hospitalist generalists who are members of JPCA and JSHGM; hence, there is a high possibility of response bias, and their perspectives do not necessarily represent those of the broader population of Japanese generalists. Third, the evaluation is based on a ranking system, so the intervals between the first, second, and third places in the chosen ranking are not necessarily equal. Fourth, the survey items were somewhat skewed: some items were detailed while others were not, which may have affected the results. Fifth, participants in this study differed by gender, position, and academic affiliation, and this imbalance may have affected the results. Sixth, this survey was conducted exactly when the COVID-19 epidemic began to spread in Japan. This timing may have influenced the responses, as attention at the time was focused on intensifying public health messages and emerging clinical evidence on COVID-19; however, a survey of this kind remains one of the only viable ways to characterise respondents' perspectives. Despite these limitations, this study is the first to evaluate what Japanese hospitalists consider important roles in their practice.
This study could be an important contribution highlighting key areas for the professional development of the hospital medicine profession in Japan and globally.
Japanese hospitalists practice in multiple settings, and there is some ambiguity around the uniqueness of their role compared to non-hospitalist generalists. While JSHGM has defined some core competencies for their practice, it was not previously known what Japanese hospitalists themselves consider important for their practice. These findings call attention to current gaps between the top three priorities of hospitalists and the defined competencies. They may be useful in investigating whether and how hospitalists' priorities differ from core competencies and other requirements, and in determining whether training and practice activities should be revised. Further research is needed to compare practices, and attitudinal similarities and differences, in what hospitalists around the world consider important roles in their practice. Creating opportunities to enhance hospitalist practice in the areas identified as important would also be a significant next step. This work would be crucial in fostering collaboration in training and practice development to build a strong specialty that leverages the unique characteristics of the international hospitalist community.
Additional file 1.
Additional file 2: Appendix 2. Questionnaire raw data (English version).
Additional file 3: Appendix 3. Questionnaire results: what do you consider important as a hospitalist?
Additional file 4: Appendix 4. Comparison of community-based hospitals with university hospitals.
Additional file 5: Appendix 5. Comparison of males with females.
Additional file 6: Appendix 6. Comparison of residents of specialty training programs, attendings, and managers.
Additional file 7: Appendix 7. Comparison of academic affiliations.
|
Comparison of Induced Fields in Virtual Human and Rat Heads by Transcranial Magnetic Stimulation | babc316f-836a-472f-896e-b5d6a5b3b18f | 6330837 | Physiology[mh] | Transcranial magnetic stimulation (TMS) was introduced as both a method of noninvasive brain stimulation and a neurophysiological probe. It is applied by holding an electromagnetic coil, which is either a circular shaped coil or a figure-of-eight shaped coil on the scalp. Rapidly alternating magnetic fields produced by the coil enter the brain and induce electrical current, which leads to neuronal depolarization. As a noninvasive method to stimulate the brain, TMS has attracted considerable interest as an important tool for studying the functional organization of the human brain as well as a therapeutic tool to treat many psychiatric disorders and neurological conditions, including depression , schizophrenia , obsessive-compulsive disorder , posttraumatic stress disorder , Parkinson's disease , dystonia , tinnitus , epilepsy , and stroke . Although extensive researches have been done on TMS in the past two decades, no clear-cut conclusion has been reached on the underlying cellular and molecular mechanisms as well as the therapeutic mechanisms used in clinical practice. Animal models are helpful in elucidating some mechanisms of TMS as we are allowed to carry out invasive studies of molecular and genetic changes which are ethically not possible to be done on human beings. Recently, several experiments have shown that TMS has the ability to mediate neuroplasticity by enhancing the expressions of glutamate neurotransmitters in the rat brain . TMS not only activates some brain regions, but also increases the expression level of gene expression signals in the rat . Also, animal models of TMS play significant roles in understanding TMS-induced plasticity mechanisms as they can offer a more direct way to measure TMS-induced synaptic and nonsynaptic plasticity and to promote the neural repair . 
One of the major limitations of animal models of TMS is the lack of animal-specific stimulation coils. For example, most rat TMS studies use commercial human coils that are larger than the rat brain . It is therefore necessary to develop coils for small animals such as the rat. Recently, a mouse coil was produced that offers an increased magnetic field and reduced heating . The purpose of this paper is to develop a TMS coil for the rat model with specific dimensions. We compare the induced electric fields in realistic human and rat head models using the conventional figure-of-eight (Fo8) coil for the first time. The rat TMS coil is designed by downscaling the size of the conventional human TMS coil as well as reducing the injected current. It was found that the designed Fo8 coil can be applied to rat TMS with improved focality while keeping high stimulation intensities.
The realistic rat model was obtained from Brooks Air Force Laboratory (BAFL), USA. There are 36 different tissues in the rat model with the dimensions of 126 mm, 240 mm, and 54 mm along the x, y, and z directions, respectively. The rat model is composed of 6.94 million cubic voxels with a resolution of 0.5 mm x 1 mm x 0.5 mm. shows the rat model with transparency of both the brain and nerve. shows a typical head slice in the coronal plane which contains the rat brain. And shows the brain slice with gray matter and CSF. The realistic human head model as shown in was obtained from a 34-year-old man model developed by the Virtual Family project . The man model was segmented in 77 tissues of which 36 tissues are involved in the present head model. The head model is composed of 10.47 million cubic voxels with a resolution of 1 mm x 1 mm x 1 mm. Some important brain subregions, such as the thalamus, hippocampus, pons, and pineal body, were included in the model.
The figure-of-eight coil used for human brain stimulation is shown in . The inner and outer radii of the circular wings are 10 mm and 35 mm, respectively. We applied a current with magnitude I = 7.7 kA and working frequency f = 3.6 kHz in the TMS coil. The same coil was also placed in the anterior position between the ears in the rat model, as shown in . The same stimulation parameters were used in the coil for rat stimulation to allow comparison with the human model.
The time variation of the applied magnetic field induces currents in the tissues through Faraday's induction mechanism. We calculated the magnetic flux density (B-field) and induced electric field (E-field) in both the human and rat models by employing the impedance method . In this method, the models are described using a uniform 3D Cartesian grid and are composed of small cubic voxels. There are 10.47 million voxels for the human head and 6.94 million voxels for the rat in the computational space. Assuming that, in each voxel, the electric conductivity values are isotropic and constant in all directions, the model can be represented as a 3D network of impedances. The impedance in each direction can be expressed as

\[
Z_m^{i,j,k} = \frac{\Delta m}{\Delta n\,\Delta p\,\sigma_m^{i,j,k}} \tag{1}
\]

where \(i, j, k\) are the voxel indexes; \(m\) is the direction (\(x\), \(y\), or \(z\)) for which the impedance is calculated; \(\sigma_m^{i,j,k}\) is the electrical conductivity of the voxel in the \(m\)-th direction; and \(\Delta m\), \(\Delta n\), and \(\Delta p\) are the sizes of the voxel in the \(m\), \(n\), and \(p\) directions. Kirchhoff's voltage law applied to each loop in this network generates a system of equations for the loop currents. The net currents within the models are calculated from these loop currents, and the electric field is in turn calculated using Ohm's law. The electrical properties, obtained from BAFL, are modeled using the four-Cole-Cole method . In this method, the complex permittivity \(\varepsilon_c\) of biological tissue subjected to an electric field with angular frequency \(\omega\) is modeled by relaxation theory and can be expressed as

\[
\varepsilon_c(\omega) = \varepsilon_\infty + \sum_{r=1}^{4} \frac{\Delta\varepsilon_r}{1 + \left(j\,\dfrac{\omega}{2\pi}\,\tau_r\right)^{\alpha_r}} + \frac{\sigma_I}{j\omega\varepsilon_0} \tag{2}
\]

where \(\varepsilon_\infty\) is the permittivity in the high-frequency limit, \(\sigma_I\) is the conductivity, \(\tau_r\) is the relaxation time in dispersion region \(r\), and \(\Delta\varepsilon_r\) is the drop in permittivity in the frequency range for which the time period \(2\pi/\omega\) is either much smaller or much larger than the relaxation time.
These parameters are obtained by fitting to experimental measurements [ – ]. With appropriate parameter values for each tissue, the above equation can be used to predict the frequency dependence of the dielectric properties. After calculating \( \varepsilon_c \), the conductivity \( \sigma \) of each tissue is obtained as

(3) \( \sigma(\omega) = -\operatorname{Im}\{\varepsilon_c(\omega)\}\, \omega \varepsilon_0 \).

The tissue conductivity values used in this paper are presented in .
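The pipeline of Eqs. (1)–(3) can be sketched in a few lines. The dispersion parameters below are illustrative placeholders, not the fitted tissue values from the cited measurements, and the exponent \( 1-\alpha_r \) follows the standard four-Cole-Cole form.

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)

def cole_cole_permittivity(omega, eps_inf, d_eps, tau, alpha, sigma_i):
    """Complex permittivity via the four-Cole-Cole model (Eq. 2)."""
    eps = eps_inf + 0j
    for de, t, a in zip(d_eps, tau, alpha):
        eps += de / (1 + (1j * omega * t) ** (1 - a))
    return eps + sigma_i / (1j * omega * EPS0)

def conductivity(omega, eps_c):
    """Tissue conductivity from the complex permittivity (Eq. 3)."""
    return -np.imag(eps_c) * omega * EPS0

def voxel_impedance(delta_m, delta_n, delta_p, sigma_m):
    """Impedance of one voxel edge in the m direction (Eq. 1)."""
    return delta_m / (delta_n * delta_p * sigma_m)

# Illustrative (NOT fitted) dispersion parameters for one tissue at f = 3.6 kHz.
omega = 2 * np.pi * 3.6e3
eps_c = cole_cole_permittivity(
    omega, eps_inf=4.0,
    d_eps=[45.0, 400.0, 2e5, 4.5e7],
    tau=[7.96e-12, 15.9e-9, 106.1e-6, 5.3e-3],
    alpha=[0.1, 0.15, 0.22, 0.0],
    sigma_i=0.02)
sigma = conductivity(omega, eps_c)
# Impedance of a 2 mm cubic voxel with this conductivity:
z = voxel_impedance(2e-3, 2e-3, 2e-3, sigma)
```

In the full solver, one such impedance is assigned to every voxel edge and Kirchhoff's voltage law is then applied to each loop of the resulting 3D network.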
shows the magnetic field distributions (B-field) in the coronal slice at y = 120 mm of the human head model and y = 52 mm of the rat model, respectively. In order to show the field distribution in the head tissues, the contour outlines of the skin and gray matter (GM) were included in each figure. shows the rat brain and skin in the same slice separately. By comparing Figures and , one can clearly see how the magnetic field is distributed in the rat brain. A large difference is observed when comparing the B-field in the human brain with that in the rat brain. The B-field in the human brain is smaller than that in the scalp and skull, where the B-field exceeds 1.2 T and is represented by red color ( ). For the rat model, however, a B-field of this amplitude (> 1.2 T) is distributed over almost the whole rat brain ( ), which means that the conventional TMS protocol for human brain stimulation is too strong for rat TMS. shows the induced electric field distribution (E-field) in the coronal slice at y = 120 mm of the human head and y = 52 mm of the rat model, respectively. In order to display the results clearly, the color scale covers the range 0–100 V/m, and all values above 100 V/m, i.e., the neuron excitation threshold , are shown in dark red. Again, a large difference is observed when comparing the E-field in the human brain with that in the rat brain. In , the E-field is mainly distributed on the GM surface in several limited areas of the human brain, which indicates that the Fo8 coil produces a focal stimulation. For the rat ( ), almost the whole brain is potentially excited. A quantitative comparison of the brain volumes with an E-field larger than 100 V/m for both rat and human stimulation is shown in . It clearly shows that only 3% of the human brain is potentially stimulated, while this value is 69.8% for the rat.
From the results shown above, we conclude that the conventional TMS coil used for human brain stimulation is too strong for rat brain stimulation. In order to find suitable coil and stimulation parameters for rat TMS, we investigated the dependence of the excited brain volume (brain tissue with an E-field larger than 100 V/m) on the stimulation parameters. Starting from the human TMS setup described in the previous section, we varied the coil current, coil outer radius, and number of coil turns, respectively, and calculated the percentage of potentially excited brain volume relative to the whole brain volume. The obtained results are shown in . It can be observed that the outer radius of the coil has less impact on reducing the excited brain volume ( ). However, decreasing either the injected coil current or the number of coil turns significantly reduces the excited brain volume (Figures and ). Based on these results, we designed a new figure-of-eight coil specifically for rat brain stimulation with improved focality. The coil parameters are as follows: the outer and inner radii of each wing are 20 mm and 10 mm, respectively, the number of wire turns is 5 per wing, and the injected current is 4.0 kA. shows the outline of this designed Fo8 coil for rat TMS. For comparison, the original Fo8 coil for human TMS is shown in . It can be seen that the newly designed coil is significantly reduced in size. presents the comparison of excited rat brain volumes using the conventional Fo8 coil and the newly designed coil. With the new coil, only 3% of the rat brain is excited, while this coil has little effect on human brain stimulation. shows the distribution of the B-field and E-field in the coronal slice at y = 52 mm of the rat model when employing the newly designed coil.
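Because the induced E-field scales linearly with the injected current at fixed coil geometry and frequency, a current sweep like the one described above can be emulated by rescaling a single simulated field map. A minimal sketch, using a synthetic |E| array in place of the simulated data and the 100 V/m excitation threshold from the text:

```python
import numpy as np

def excited_fraction(e_field, brain_mask, threshold=100.0):
    """Fraction of brain voxels whose induced E-field exceeds threshold (V/m)."""
    return float(np.mean(e_field[brain_mask] > threshold))

# Synthetic stand-in for a simulated |E| map at the reference current.
rng = np.random.default_rng(0)
e_ref = rng.gamma(shape=2.0, scale=40.0, size=(60, 60, 60))  # V/m, toy data
mask = np.ones_like(e_ref, dtype=bool)

# E scales linearly with coil current, so sweep currents by rescaling.
i_ref = 7.7  # kA, reference current from the human protocol
for i_coil in (2.0, 4.0, 7.7):
    frac = excited_fraction(e_ref * (i_coil / i_ref), mask)
    print(f"I = {i_coil:.1f} kA -> {100 * frac:.1f}% of voxels above 100 V/m")
```

Changing the coil radius or the number of turns alters the field geometry, so those sweeps require re-simulation; only the current sweep reduces to a rescaling.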
By comparing the B-fields between Figures and , and the E-fields between Figures and , it can be observed that the focality of both the B-field and the E-field in the rat brain is improved significantly. The distribution of the B-field and E-field in the coronal slice at y = 120 mm of the human head when employing the new coil is presented in for comparison. It is obvious that both the magnetic field and the electric field in the human head are very small and are confined mainly to the scalp of the human head model.
This paper first presents a comparison of standard Fo8 TMS between human and rat models employing the impedance method. The distributions of both the B-field and E-field in virtual human and rat brains are presented. The results show that it is not possible to stimulate small rat brain regions selectively with a standard Fo8 TMS coil. A new rat-specific Fo8 coil with different coil parameters was designed by downscaling the coil size and changing the stimulation parameters. The results show that only 3% of the rat brain will be potentially excited. The new coil design provides a new tool for small-animal stimulation with improved focality, and the method presented in this paper allows the design of more suitable coils for use in other biological models.
|
Comparison of methods for detecting mandibular lingula and can antilingula be used in lingula mandibula detection? | 9571105b-833b-440f-a7b5-c278d4702fe0 | 11938737 | Musculoskeletal System[mh] | In routine dental practices, particularly during procedures such as inferior alveolar nerve (IAN) block anaesthesia and orthognathic surgeries, including bilateral sagittal split ramus osteotomy (BSSO) and inferior vertical ramus osteotomy (IVRO), understanding the anatomical positions of the lingula mandibula (LM) and mandibular foramen (MF) is essential for preserving this neurovascular bundle. The IAN, the thickest branch of cranial nerve (CN) V3, enters the mandible through the MF, which is located on its medial surface. The lingula is an irregularly shaped bony prominence on the medial aspect of the mandibular ramus near the MF . BSSO is the most commonly used method in mandibular orthognathic surgeries. The osteotomy should be performed horizontally, just above the LM located on the medial surface of the mandible. The most frequent complications of BSSO include IAN injury and improper fractures of the mandible. Permanent damage to the IAN is the complication that most significantly impacts a patient’s daily life [ – ]. The primary cause of this complication is performing the osteotomy line below the recommended level. Osteotomies conducted below this level can result in bleeding and neurological complications due to damage to the alveolar neurovascular bundle . IVRO originally performed using an extraoral approach, has been conducted intraorally for over 30 years following the introduction of electric oscillating saws . Despite being an older technique, IVRO is still widely used to treat mandibular prognathism . It has been reported that IVRO procedures have a lower incidence of permanent neurosensory disturbances than BSSO procedures . Additionally, IVRO has a shorter operative time compared to BSSO . 
However, since this technique is performed on the lateral surface of the mandible, the medial structures, including the IAN, LM, and MF, cannot be directly visualized, presenting a disadvantage . Therefore, an anatomical reference point on the lateral surface of the ramus was identified, specifically the most prominent bony point below the sigmoid notch (SN), which is referred to as the antilingula (AL) . This study aims to determine the correlation between reference points used in different techniques during orthognathic surgery and to minimize the risks of iatrogenic neurovascular damage.
This study was conducted retrospectively by reviewing the archive records of cone-beam computed tomography (CBCT) images involving the entire mandible, obtained for various reasons (resorption, trauma, impacted teeth, etc.) from patients who visited the Department of Oral and Maxillofacial Radiology at Recep Tayyip Erdoğan University Faculty of Dentistry for examination between January 2018 and September 2023. Institutional research ethics approval was obtained from the Recep Tayyip Erdoğan University Non-invasive Clinical Research Ethics Committee (serial number: E-40465587-050.01.04-1227). The study adhered rigorously to the principles outlined in the Declaration of Helsinki throughout all stages of the research process. Informed consent forms were obtained from all patients. The age range of the individuals included in the study was 18 to 80 years.

Exclusion Criteria

1. Individuals with a history of intraosseous pathology (cyst/tumour) and/or syndromes (e.g., cleft palate or syndromes causing maxillary defects).
2. CBCT images that do not possess adequate diagnostic quality.
3. Cases with a history of surgical or orthodontic treatment.

CBCT images obtained using the NewTom VGI Evo (Cefla, Verona, Italy) device were evaluated using the image processing software Planmeca Romexis 4.6.2.R (Planmeca Romexis, Helsinki, Finland), and measurements were also performed using this software. CBCT imaging was performed with the following parameters: field of view (FOV) of 16 × 16 cm, tube voltage of 110 kVp, tube current of 3.00–3.85 mA, and exposure time of 1.8 s. During the scanning process, the patient's age, gender, and the presence or absence of teeth in the evaluated region were recorded. Standardization of the evaluations was ensured by aligning the sagittal plane perpendicular to the ground plane and the Frankfurt horizontal plane parallel to the ground plane.
The right and left mandibular rami were assessed separately on 3D images obtained from multiplanar reconstruction views.

1. The lingula mandibula (LM) type was classified, following Tuli et al. , as nodular (a nodular lingula of variable size, with almost the entire lingula except its apex merged into the ramus), truncated (a lingula with a somewhat quadrangular bony projection at its top), triangular (a wide base with a narrow, rounded or pointed apex), or assimilated (Fig. ). The assimilated type indicates the absence of the LM.
2. The distances of the LM and mandibular foramen (MF) to the sigmoid notch (SN), anterior ramus (AR), posterior ramus (PR), and gonion (Go) were measured using the methods described by Sinanoğlu et al. and Findik et al. . Additionally, the vertical and horizontal distances between the MF and LM were measured as outlined in the Sinanoğlu et al. study (Fig. ).
3. Horizontal and vertical measurements from the midpoint between the coronoid process and gonion (MCG) to the LM, as well as from the midpoint of the mandibular ramus (MW) to the LM, were conducted similarly to the methods used by Apinhasmit et al. (Fig. ).
4. If an antilingula (AL) was present, the distances of the AL to the SN, AR, PR, and Go were measured following the methodologies of Chen et al. and Findik et al. (Fig. ).

Evaluations on the Lateral Surface of the Mandible

1. The AL type was classified, following Chen et al. , as hill, ridge, plateau, or plain: a 'hill' is higher than the surrounding area, a 'ridge' has a narrow, raised part, and a 'plateau' is a large, flat area higher than its surroundings (Fig. ). An AL described as 'plain' indicates the absence of an antilingula.
2. In the presence of both AL and LM, vertical and horizontal measurements between the two were performed similarly to the study conducted by Sinanoğlu et al. (Fig. ).
Statistical analysis

The sample size was calculated using G*Power 3.1 software (Heinrich-Heine University of Düsseldorf, Germany). A post hoc power analysis was conducted for a one-way analysis of variance (ANOVA) with a 95% confidence level (1-α), an effect size f = 0.3877551, 4 groups, and a sample size of 120; the achieved power (1-β) was 95.0% . Descriptive statistics were calculated and presented as mean and standard deviation (SD). The normality of the data distribution was assessed using the Kolmogorov-Smirnov test. One-way ANOVA was used to determine differences among groups for parametric data. The homogeneity of variance was evaluated using Levene's test, and the post-hoc Tukey (Tukey HSD) test was applied for pairwise comparisons. For non-parametric data, the Kruskal-Wallis test was used. A p-value of < 0.05 was considered statistically significant. A radiologist (T.E.K.) with 10 years of expertise performed all measurements. After 1 month, the same examiner (T.E.K.) re-analyzed 60 randomly chosen rami to assess measurement error.
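The G*Power result above can be reproduced approximately from the noncentral F distribution, with noncentrality λ = f²·N. A sketch with SciPy; the helper name is ours, and the output should land near the reported 95%:

```python
from scipy.stats import f as f_dist, ncf

def anova_power(effect_f, n_total, alpha, k_groups):
    """Post-hoc power of a one-way ANOVA F test for a Cohen's f effect size."""
    df_num = k_groups - 1
    df_den = n_total - k_groups
    nc = effect_f ** 2 * n_total                  # noncentrality parameter
    f_crit = f_dist.ppf(1 - alpha, df_num, df_den)
    return 1 - ncf.cdf(f_crit, df_num, df_den, nc)

# Values as reported in the study: f = 0.3877551, alpha = 0.05, 4 groups, N = 120.
power = anova_power(0.3877551, 120, 0.05, 4)
print(f"Achieved power: {power:.3f}")
```

Power increases monotonically with the total sample size at a fixed effect size, which is why the 240 hemimandibles give the study comfortable headroom over the conventional 80% target.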
The intraobserver agreement was estimated using the intra-class correlation coefficient (ICC) and was found to be excellent for all measurements (ICC ≥ 0.997). This study was conducted on 120 patients, encompassing a total of 240 hemimandibles. The participant group consisted of 55.83% females and 44.17% males. The mean age of the participants was 46.78 ± 15.30 years. When classified according to lingula mandibula type, the nodular type LM was detected in 48.75% (117/240) of rami, the truncated type in 16.25% (39/240), the triangular type in 10.42% (25/240), and the assimilated type in 24.58% (59/240). Table shows the positions of the lingula according to the antilingula types. A significant difference was observed in the LM-SN distance: patients with hill, ridge, and plateau type antilingulae had their lingula positioned more superiorly. A significant difference was also found in the distance between the lingula and the anterior ramus across antilingula types: the lingula was positioned more posteriorly in patients with plateau type antilingula. No significant difference in the position of the antilingula was found between the different antilingula types (Table ). The vertical positions of the lingula in each antilingula type are shown in Table . According to these results, in all antilingula types the lingula was located more inferiorly than the antilingula. The horizontal positions of the lingula in relation to the antilingula in each type are presented in Table . According to these findings, in hill and ridge type antilingulae the lingula was positioned more posteriorly relative to the antilingula, while in the plateau type it was positioned more anteriorly. An antilingula was not detected in 26.25% of the mandibular rami, highlighting its absence in a significant portion of the study population. This finding reinforces the necessity of additional anatomical landmarks when planning osteotomy procedures.
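The intra-class correlation reported above (two measurement sessions by the same examiner) can be computed from a two-way ANOVA decomposition. A sketch of the single-measure consistency form ICC(3,1), run on made-up illustrative measurements rather than the study's data:

```python
import numpy as np

def icc_3_1(x):
    """Two-way mixed, consistency, single-measure ICC(3,1).

    x: (n_targets, k_sessions) matrix of repeated measurements."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)   # between targets
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)   # between sessions
    ss_tot = np.sum((x - grand) ** 2)
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_tot - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Illustrative session-1 vs session-2 measurements (mm) for 4 rami.
demo = [[10.0, 10.1], [12.0, 11.9], [14.0, 14.05], [16.0, 15.95]]
print(f"ICC(3,1) = {icc_3_1(demo):.3f}")
```

Values above roughly 0.9 are conventionally read as excellent reliability, which is the interpretation applied to the ≥ 0.997 figure above.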
In this study, in which the position of the LM was investigated according to AL type, it was concluded that the position of the LM can be partially predicted based on the AL type. Specifically, in hill and ridge type ALs the LM is positioned more infero-posteriorly, whereas in the plateau type it is positioned more infero-anteriorly. However, no significant change in the position of the AL itself was found according to its type. Aging is known to influence skeletal morphology, including mandibular structures. However, previous studies suggest that the most notable age-related change in mandibular morphology is a reduction in symphysis height, while other parameters remain relatively stable . In our study, despite the higher mean age of the participants, the mandibular morphology is still relevant for evaluation, as the structural integrity of the mandible remains largely preserved with age. This supports the validity of our study population and findings. In previous studies, including the classification by Chen et al. , the AL was categorized into four types: hill, ridge, plateau, and plain. However, in our study we refined this classification by retaining the hill, ridge, and plateau types while excluding the plain type. The plain AL category contained cases that did not fit any specific classification, potentially leading to inconsistencies. By eliminating this category, we aimed to create a more systematic and precise classification framework, allowing a clearer interpretation of AL variations. The MF is an opening on the medial surface of the mandibular ramus through which the inferior alveolar neurovascular bundle enters the mandible. The edge of the MF typically has a "V" shape. The LM is a small bony prominence located just above the MF opening and is often used as an anatomical landmark for the IAN block. For a predictably successful IAN block, the needle tip must approach the MF and be at least 5 mm above the LM . Park et al.
reported that the LM is located 0.8 mm anterior and 7.5 mm superior to the MF. However, other studies have indicated that the LM is positioned 7–7.8 mm superior to the MF . In this study, the vertical distance between the LM and MF, based on AL types, was found to be 8.6 mm, 9.2 mm, 11.1 mm, and 8.0 mm, respectively. The horizontal distance between the LM and MF, according to AL types, was 1.7 mm, 2.0 mm, 1.4 mm, and 1.6 mm, respectively. Therefore, in all AL types, the LM is positioned superior and anterior to the MF. Another situation where the MF and LM are used as anatomical landmarks is during BSSO in orthognathic surgery. BSSO is one of the most frequently performed surgical procedures today, and the risk of IAN damage is a known complication . The MF and LM are critical anatomical landmarks for medial surface horizontal osteotomies during BSSO. Numerous reports emphasize that the medial horizontal osteotomy should be performed just above the LM and extended as far back as possible from its posterior edge. However, identifying the LM can be challenging due to limited surgical visualization, muscle-tendon attachments in this area, and morphological variations . The AL, which serves as the counterpart to the LM, is a small bony prominence located on the lateral side of the ramus. This prominence was identified by Aziz et al. as an anatomical landmark to help prevent IAN damage due to the difficulty in accessing and identifying the LM from the buccal side of the ramus during IVRO. Reitzik et al. proposed that the AL serves as the attachment site for the masseter muscle, describing this prominence as the masseteric apical protrusion. The way the masseter muscle attaches to the mandibular ramus and its strength can influence both the formation of the AL and the size of the protruding area. Therefore, the AL may not always be detectable. An anatomical study examining the presence and position of the AL was conducted by Yates et al. . 
This study, involving three researchers, analyzed 70 dried mandibles and found that the AL was present in only 44% of cases, with 15% showing complete absence and the remaining 41% categorized as “uncertain.” Additionally, they found that the position of the AL was highly variable in relation to the MF, with only 18% of cases showing a distance of 3 mm or less between these two landmarks. Another anatomical study by Tamas, conducted on 200 rami, revealed that the AL could be definitively identified in only 54% of cases . In our study, the AL was not identifiable in 63 rami (26.25%). Furthermore, hill type AL was detected in 94 rami (39.17%), ridge type AL in 45 rami (18.75%), and plateau type AL in 38 rami (15.83%). Aziz et al. reported that, in most cases, the LM is located below and behind the AL. Pogrel et al. found that the likelihood of the LM being positioned below and behind the AL is 68.3%, with an average distance of 5.39 mm between them. Park et al. demonstrated that, on average, the LM is positioned 4.19 mm posterior and 0.54 mm superior to the AL. Additionally, the MF is located 4.98 mm posterior and 6.95 mm inferior to the AL. In our study, consistent with these findings, it was observed that in all AL types, the LM is positioned more inferiorly compared to the AL. In hill and ridge type ALs, the LM was found to be more posterior to the AL, whereas in plateau type AL, it was positioned more anteriorly. Researchers [ , , ] have suggested that the position of the osteotomy cut in IVRO can be determined based on the position of the AL, with the osteotomy line placed behind the AL to prevent damage to the IAN. However, relying solely on the AL as the primary reference point for determining the osteotomy line may increase the risk of IAN injury. 
Therefore, incorporating the positions of the LM and MF, particularly their anterior-posterior and superior-inferior dimensions, into the planning process could offer a safer approach when defining the osteotomy line for IVRO. Currently, a standardized anatomical measurement specific to IVRO has not been established, making it inappropriate to use the AL as an absolute reference point during surgical procedures. Instead, preoperative tomographic evaluations assessing the relationships among the MF, LM, and AL positions, with the AL serving as a guiding reference for IVRO, are crucial for safeguarding the IAN. This comprehensive approach could significantly reduce the risk of surgical errors and postoperative complications. This study has several limitations that should be acknowledged. The retrospective nature of the study may introduce inherent biases related to data collection and patient selection. The study population did not consist of actual surgical patients, which may limit the direct clinical applicability of the findings to orthognathic surgery planning. All participants were from a single ethnic background, which may limit the generalizability of the findings to other populations. Future studies with prospective designs, larger and more diverse patient cohorts, and multiple measurement sessions are needed to validate these findings and further refine the clinical implications of antilingula-based osteotomy guidance.
According to the findings of this study, the placement of cuts during IVRO should be determined based on the position of the AL. In hill and ridge type ALs, the LM was observed to be positioned more posteriorly and inferiorly relative to the AL. Therefore, in these types, placing the osteotomy line immediately behind the LM and posterior to the AL is recommended to prevent IAN injury. Conversely, in the plateau type AL, the LM was found to be located more anteriorly. In such cases, the osteotomy line should be placed toward the anterior portion of the AL, ensuring that the cuts are positioned farther from the LM. However, as the AL was not detected in 26.25% of the mandibular rami, it cannot be considered a universal guide for the osteotomy line. Therefore, this approach should only be applied in cases where the specified AL types are clearly identified. Careful planning of the osteotomy line according to AL type remains essential, both to account for the LM's position and to protect the IAN.
|
Comparative dissection of the peripheral olfactory system of the Chagas disease vectors | 55ff485c-5b17-45d5-81ad-de55828b00a1 | 8078792 | Physiology[mh] | Chagas disease or American trypanosomiasis, caused by infection with the protozoan Trypanosoma cruzi , is a chronic disease that is endemic in 21 Latin American countries, where it significantly affects the most vulnerable inhabitants. It is estimated that its prevalence in some areas can be as high as 5%, and its annual burden in health care costs sums up to 600 million dollars . Already in 1905 it was shown that blood-sucking insects belonging to the Triatominae subfamily (Heteroptera: Reduviidae) transmit T . cruzi through their faeces. To date, the most effective and successful methods to control the spread of Chagas disease have been vector control policies. Wide-spread use of pesticides and training of local communities to identify and kill the insects are the most efficient strategies to date . However, with the appearance of pesticide-resistant insects, new management strategies are urgently needed. Triatominae is a poorly defined and possibly paraphyletic group of the predaceous true bugs of the family Reduviidae . All 151 described species, phylogenetically grouped into five tribes , are capable of transmitting Chagas disease . From these, some species, such as Rhodnius prolixus and Triatoma infestans , are considered particularly important from an epidemiological standpoint, as they are widely distributed in South America and are often found, though not exclusively, in domiciliated areas . However, most of the species of the Rhodniini tribe, to which R . prolixus belongs, are thought to have a more restricted distribution in sylvatic areas. Within the areas they inhabit, triatomines often find refuge in palm trees and, depending on the number of palm tree species in which they nest, they can be classified as refuge generalists or specialists . While R . 
prolixus is known to be of the first type, an interesting example of a sylvatic specialist species is Rhodnius brethesi , which, so far, has only been found on the palm tree species Leopoldina piassaba . Despite the interesting nature of these associations, studies on sylvatic species have been marginal, with most of the research focused on domiciliated species. However, as deforestation and climate change increase , sylvatic species will lose their natural habitats and might find refuge in domestic and peridomestic areas , putting their inhabitants at higher risk and becoming a public health problem. Thus, in order to design better vector control strategies, a thorough understanding of the differences and similarities between triatomine species having different habitat requirements is needed. Being active at night, triatomines make use of physical and chemical cues to find their hosts . Several studies have highlighted the importance of olfaction for host-seeking behavior in these insects . Terrestrial vertebrates, the main hosts for these obligatory haematophagous insects, emit odor signatures that can be composed of up to 1000 different volatiles , many of them being produced by the skin microflora . Previous work has shown that T. infestans and R. prolixus make use of some of these volatiles to find their hosts . In particular, carbon dioxide, 1-octen-3-ol, acetone, several amines, as well as carboxylic acids are attractive cues for R. prolixus , whose detection is achieved by specialized olfactory sensilla on the antenna . However, studies comparing olfactory responses between different triatomine species are lacking. In insects, differences in olfactory tuning have been reported between species of the same genus, and between wild and domestic insects of the same species . In triatomines, previous studies have shown that the number of olfactory sensilla is correlated with the complexity and number of ecotypes in which the insects are found .
For instance, domestic species with stable environments have a lower number of chemosensory sensilla than their sylvatic relatives . Furthermore, reduced expression of odorant binding proteins (OBPs) and chemosensory proteins (CSPs) is found in domiciliated Triatoma brasiliensis compared to sylvatic and peridomestic ones . In this study, we hypothesized that the morphology and tuning of the olfactory system of R. prolixus and R. brethesi reflect the different habitat distribution and requirements of the two species. To test this, we used a comparative approach to characterize the peripheral olfactory system of the widely distributed generalist R. prolixus and the sylvatic specialist R. brethesi at an anatomical and functional level.
Insect rearing

Insects were reared as previously described . Adult males of R. brethesi and R. prolixus , starved for 3–4 weeks, were used in the experiments. Batches of insects were kept in individual boxes with a light:dark cycle set to 12:12 h. The boxes were placed inside a chamber at 25°C and 60% relative humidity. Each insect was used at the beginning of the scotophase, as it has been shown that olfactory acuity is higher at this time point . Laboratory rearing has been shown to have a species-specific impact on the number and distribution of olfactory and mechanosensory sensilla . However, according to previous work, in the case of R. prolixus this effect is either non-existent or only moderate . While an increase in the density of mechanosensory sensilla (bristles) and a reduction in the number of trichoid and basiconic sensilla have been observed in laboratory-reared R. brethesi compared to wild specimens , it was not possible to include field-collected specimens of this species.

SEM

The heads of the insects, including the antennae, were fixed with 2.5% (v/v) glutaraldehyde in cacodylate buffer (pH 7.4) for 60 min. Afterwards, the samples were washed three times for 10 min with cacodylate buffer and dehydrated in ascending ethanol concentrations (30%, 50%, 70%, 90% and 100%) for 10 min each. Subsequently, the samples were critical-point dried using liquid carbon dioxide and sputter coated with gold (approximately 2 nm) using an SCD005 sputter coater (BAL-TEC, Balzers, Liechtenstein). Finally, the relevant surfaces were analyzed with a scanning electron microscope (SEM) LEO-1450 (Carl Zeiss NTS GmbH, Oberkochen, Germany) equipped with a rotating sample stage to allow all-around imaging.

Odors

Odors were obtained from Sigma-Aldrich, FLUKA, or Aldrich at the highest purity available.
Compounds used are listed in and Tables, together with the respective solvent (paraffin oil, CAS: 8012-95-1, Supelco, USA; distilled water; or ethanol, Sigma-Aldrich, Germany) in which each odor was diluted. For electroantennogram (EAG) recordings, a dilution of 10% in paraffin oil (Supelco, USA) was used, while all odors were applied at a dilution of 1% in single-sensillum recordings (SSRs). In EAG recordings, certified-grade carbon dioxide, diluted to a final concentration of 30% with air, was used. An odor blend, used only in SSR, was created by mixing all compounds listed in in a 1:1 ratio, each at 1% dilution in paraffin oil. The compounds in this blend are known to be detected by odorant receptors (ORs) in other insect species, and the blend was thus designed to identify possible ORs housed in the grooved peg sensilla of Rhodnius spp.

Odor application

Odors used as stimuli were prepared at the beginning of each experimental session: a 10 μl aliquot of the diluted odor (see and Tables) was pipetted onto a fresh filter paper (Ø = 1 cm², Whatman, Dassel, Germany), which was placed inside a glass Pasteur pipette. Each loaded filter paper was used a maximum of 3 times to ensure a stable concentration across experiments. Highly volatile carboxylic acids and aldehydes were loaded at each stimulus presentation. Carbon dioxide was diluted shortly before application and delivered with a custom-made syringe connected to an automated stimulus controller, as described before. A stimulus controller (Stimulus Controller CS-550.5, Syntech, Germany) was used to deliver odors through a metal pipette placed less than 1.5 cm (EAG) or 0.5 cm (SSR) from the insect antenna. A constant humidified airflow of 1.0 l min⁻¹ was delivered to the insect, while each odor pulse had an airflow of 0.5 l min⁻¹ and was buffered with a compensatory airflow of the same magnitude.
Electroantennogram (EAG) recordings

An antenna was quickly severed between the scape and the pedicel and placed between two metal electrodes. Conductive gel (Spectra 360, Parker Laboratories, Fairfield, USA) was applied to each end of the antenna. The electrode was connected to a Syntech IDAC analog/digital converter (Syntech). Acquisition was done with Autospike32 at a sample rate of 2400 Hz. While the application of odors was randomized, the control (paraffin oil) was applied at regular intervals. During the screening of the odor panel, we observed an increase in the response amplitude to the control as a function of time. To account for this bias, we normalized each recording, similarly to previous work, as:

A_n(t) = Z_n − C(t), with C(t) = a·((T − t)/T) + b·(1 − (T − t)/T),

where A_n(t) is the normalized response to a given odor stimulus n at time t; Z_n is the measured response to the odor stimulus n; and C(t) is the averaged solvent response at time t, with a being the closest solvent response before stimulus presentation, at t_a, and b the closest solvent response after stimulus presentation, at t_b. The contribution of each of these solvent responses to the averaged solvent response is weighted by the factor (T − t)/T, where T = t_b − t_a.

Single-sensillum recordings (SSR)

Insects were placed inside a severed 5 ml plastic tip (Eppendorf, Hamburg, Germany), which was sealed with dental wax (Erkodent, Pfalzgrafenweiler, Germany). The tip was then immobilized on a microscopy slide with dental wax. Both antennae were glued to a coverslip with double-sided tape. A tungsten electrode inserted into the insect's abdomen served as reference. Preliminary recordings with a silver wire as reference electrode did not show an improvement in the signal-to-noise ratio.
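Returning to the EAG normalization above, the drift correction can be sketched in Python. This is a minimal sketch, not the authors' code: we assume that t, t_a and t_b are expressed on a common time axis, which makes C(t) a linear interpolation between the two flanking solvent responses.

```python
def drift_corrected_response(z_n, t, t_a, a, t_b, b):
    """Correct an EAG response for solvent-response drift.

    z_n : measured response Z_n to the odor stimulus presented at time t
    a   : closest solvent (paraffin oil) response before the stimulus, at t_a
    b   : closest solvent response after the stimulus, at t_b

    Returns A_n(t) = Z_n - C(t), where C(t) weights the two flanking
    solvent responses by their temporal proximity to t.
    """
    T = t_b - t_a
    w = (t_b - t) / T          # weight of the earlier solvent response
    c_t = a * w + b * (1.0 - w)
    return z_n - c_t
```

At t = t_a the correction equals a, at t = t_b it equals b, and in between the two solvent responses are linearly interpolated.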
The preparation was placed under an upright microscope (BX51WI, Olympus, Hamburg, Germany) equipped with a 50x air objective (LMPlanFI 50x/0.5, Olympus). Neural activity in the form of spike trains originating from OSNs was recorded with a sharpened tungsten electrode targeted at the base of a grooved peg sensillum. Signals were amplified (Syntech Universal AC/DC Probe; Syntech) and sampled at 10,666.7 samples s⁻¹ through a USB-IDAC (Syntech) connected to a computer. Spikes were extracted using Autospike32 software (Syntech). The response to each odorant was calculated as the change in spikes s⁻¹ upon odor stimulation, i.e., the difference in the number of impulses in the 0.5 s before and after stimulus onset, using Autospike32. The response to the solvent was subtracted from each measurement. The number of OSNs housed in each sensillum in R. prolixus has been estimated to be between 5 and 6. We attempted to confirm this observation using semi-thin sections of the antenna but, despite our efforts, were unable to decisively identify the number of sensory neurons in the grooved peg (GP) sensilla of either species. For that reason, we defined each sensillum as a responding unit, as has been done in other insects. Subsequent analysis was carried out in MATLAB (The MathWorks Inc, Natick, USA), in which an agglomerative hierarchical clustering of the sensillum responses, with a Euclidean metric and Ward's method, was performed. The inconsistency coefficient was calculated for each link in the dendrogram as a way to determine naturally occurring clusters in the data. A depth of 4 and a coefficient cutoff of 1.8 for R. prolixus and 1.0 for R. brethesi were used in the calculation. The response of each sensillum type was taken as the average response of individual sensilla belonging to the same cluster (i.e., sensillum type).
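The clustering procedure just described (Ward linkage, Euclidean metric, inconsistency-based cutoff) can be sketched in Python with SciPy as an approximation of the original MATLAB analysis. The response matrix below is random placeholder data standing in for the real odor-response measurements.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# rows = individual grooved peg sensilla, columns = responses to 38 odors
responses = rng.normal(size=(25, 38))

# Agglomerative hierarchical clustering (Euclidean metric, Ward's method)
Z = linkage(responses, method="ward", metric="euclidean")

# Cut the dendrogram where links are inconsistent with those below them,
# using a depth of 4 and a cutoff of 1.8 (the R. prolixus parameters)
clusters = fcluster(Z, t=1.8, criterion="inconsistent", depth=4)

# The response profile of each sensillum type is the mean over its members
type_profiles = {c: responses[clusters == c].mean(axis=0)
                 for c in np.unique(clusters)}
```

Each entry of `type_profiles` then corresponds to one putative sensillum type, averaged over the sensilla assigned to that cluster.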
The average responses of each sensillum type were then grouped and averaged by chemical class. These responses were then normalized to the maximum response within each sensillum type. Principal component analysis (PCA, with a singular value decomposition (SVD) algorithm) was performed in MATLAB (The MathWorks Inc) using the averaged, scaled (between 0 and 1) and z-score-normalized responses for each sensillum type and each species. As a measure of similarity, we applied a one-way ANOSIM to test whether different sensillum types represent significantly different classes. Averaged responses were computed as the mean of all sensillum responses to a particular odorant. To compare among chemical classes, these odor responses were then further averaged within each chemical class. Comparison between species was done using unpaired two-tailed Student's t-tests (GraphPad Prism 8, San Diego, USA). These average responses were then normalized to the odor that elicited the highest response in each species (propionic acid in both), and the lifetime sparseness (S) of each sensillum was calculated. We applied the lifetime sparseness as a measure of the response breadth of each sensillum. It was calculated as:

S = (1/(1 − 1/N)) · (1 − (Σ_{j=1}^{N} r_j / N)² / (Σ_{j=1}^{N} r_j² / N)),

where S is the lifetime sparseness, N is the number of tested odors and r_j is the sensillum response to any given odor j, with r_j ≥ 0 and S ∈ [0,1]; S = 0 corresponds to the case in which the sensillum responds equally to all odorants, and S = 1 to the case in which the sensillum responds to only one odor of the set.
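A minimal Python sketch of the lifetime sparseness computation (the function name is ours; negative responses are clipped to zero to satisfy the r_j ≥ 0 requirement, and at least one nonzero response is assumed):

```python
import numpy as np

def lifetime_sparseness(responses):
    """Lifetime sparseness S of a sensillum over N tested odors.

    S = 0 when the sensillum responds equally to all odors,
    S = 1 when it responds to exactly one odor of the set.
    Assumes at least one nonzero response after clipping.
    """
    r = np.clip(np.asarray(responses, dtype=float), 0.0, None)
    n = r.size
    num = (r.sum() / n) ** 2     # (sum(r_j)/N)^2
    den = (r ** 2).sum() / n     # sum(r_j^2)/N
    return (1.0 / (1.0 - 1.0 / n)) * (1.0 - num / den)
```

A uniformly responding sensillum yields S = 0, while a sensillum responding to a single odor yields S = 1, matching the bounds stated above.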
Species-specific morphological differences of the antenna

Previous studies have demonstrated that sensillum patterns in haematophagous insects, including Triatominae species, reflect specific adaptations to different hosts and habitats. Through a comparative qualitative and quantitative analysis of the main olfactory organ, the antenna, we assessed potential morphological differences between the generalist R. prolixus and the sylvatic specialist R. brethesi using scanning electron microscopy (SEM) (Figs , and ). A qualitative analysis of the morphological patterns of sensilla on the antennae of both species did not reveal major differences. For further analysis, sensilla were classified according to the study by Shanbhag et al., as has been done previously in triatomines. In both species, the second segment, or pedicel, was found to be enriched with sensilla described to have a mechanosensory function: sensilla trichobothrium, bristles I and III ( and Figs). Moreover, the cave organ, a sensillum type shown to have a thermoreceptive function in R. prolixus and T. infestans, was also found in R. brethesi. However, we were unable to identify the previously described ornamented pore in R. brethesi, possibly due to the orientation of our preparation. Notably, in one preparation, our micrographs show the existence of two previously undescribed sensillum types on the pedicel of R. prolixus. One is a peg-in-pit sensillum with an inflexible socket and no evident pores, housed within a chamber in the antennal cuticle. This type is reminiscent of the thermosensitive sensilla coeloconica, but its function remains unknown. The second sensillum type resembles a type 3 coeloconic sensillum, characterized by two pores at its base, described in other hemipteran species. Wall-pore sensilla with inflexible sockets were found on flagellomere I and the distal half of flagellomere II in both species.
These include the trichoid and basiconic sensilla (also known as thick- and thin-walled trichoid sensilla, respectively), as well as the double-walled grooved peg sensilla (referred to as basiconic sensilla in ). All of these sensillum types show slight longitudinal grooves filled with pores or slits, indicative of an olfactory function. On the same antennal segments of both species we also identified a poreless sensillum with an inflexible socket: the campaniform sensillum. As shown in other studies, quantitative differences in the number of olfactory sensilla between species potentially reflect particular adaptations to their ecological niche. Thus, we performed autofluorescence confocal scans of glycerol-embedded antennae in order to quantify the sensillum density on flagellomere II, where putative olfactory sensilla reach their highest density and where inter-specific differences have previously been reported among triatomines. Our results show that the density of both basiconic and trichoid sensilla is significantly higher in R. brethesi than in R. prolixus. In contrast, the density of grooved peg sensilla was not significantly different between the two species.

Antennal responses in Rhodnius prolixus

Previous studies aimed at characterizing odor-evoked responses in triatomines have focused on T. infestans, and reported responses to a small number of chemical compounds, comprising aldehydes, acids and amines. To assess whether other chemical classes are detected by the antennae of triatomines, we performed EAG recordings in R. prolixus using a panel of 27 odors, belonging to various chemical classes, that have previously been shown to elicit behavioral responses in triatomines and other haematophagous insects ( and Tables). Significant responses (p<0.05) were observed for 33% of the tested odors (one-sample t-test against zero).
Of these, the strongest response was to acetic acid, a compound that is present in triatomine faeces and mediates aggregation, followed by propionic acid, a known host volatile to which T. infestans responds behaviorally. Significant responses were also seen to the main component of the alarm pheromone, isobutyric acid, a compound that is also present in host volatiles, and to the closely related compound butyric acid. Taken together, the responses to acids represented 44% of the total significant responses. Additionally, R. prolixus showed significant, though smaller, responses to other host volatiles, such as cyclohexanone, amyl acetate and trimethylamine. A significant response was recorded for 30% carbon dioxide, a chemosensory cue that is attractive for T. infestans at lower concentrations. Interestingly, we also observed a significant olfactory response to butyryl chloride. While this compound has been proposed to act as an insect repellent, as it inhibits the activity of the carbon dioxide-detecting sensory neurons in mosquitoes, its function and detection in triatomines have not been studied so far.

Odor responses in grooved peg sensilla

The EAG recordings of R. prolixus demonstrated that the olfactory system of these insects responds mostly to acids and amines. These compounds are commonly found in the environment of the insects, and their role in regulating odor-guided behavior has been assessed for some species of triatomines. In insects, acids and amines are detected by neurons housed in antennal grooved peg (GP) sensilla. As shown by our morphological studies, this sensillum type is present on the antenna of both Rhodnius species at a low density, making it an ideal system to assess species-specific differences in olfactory tuning. To assess the tuning of individual GP sensilla, here defined as responding units, we tested a total of 38 odors, of which 17 were acids and 9 amines, varying in carbon chain length and branching.
We included additional volatiles (such as indole and amyl acetate) known to be present in, but not exclusive to, vertebrate hosts, or previously shown to be detected by GP sensilla in other insects. In addition, a custom OR blend, composed of compounds typically detected by odorant receptors (ORs) in other species, was also applied. Averaged sensillum responses demonstrated that R. brethesi generally responded more strongly to odors than R. prolixus. In addition, a significant overall interspecific difference was found for 58% of the odorants. While both species exhibited the strongest response to propionic acid, major differences were seen for the following compounds: butyric acid, benzaldehyde, valeric acid, 2-oxopropanoic acid, formic acid, and the OR blend, with R. brethesi displaying a higher response than R. prolixus in all cases. Butyryl chloride was the only compound with a significantly higher response in R. prolixus. Stimulation with palmitic acid generated the strongest inhibitory response in both species. When responses were normalized to the maximum odor response (i.e., propionic acid in both species), significant differences remained for five odorants: butyraldehyde, butyryl chloride, amyl acetate, 3-methyl indole, and the OR blend. Rhodnius prolixus responded, on average, more frequently to amines than R. brethesi, these accounting for 40% of responses in R. prolixus compared to 24% in R. brethesi. In contrast, R. brethesi responded more strongly to aldehydes, with 33% compared to 16% in R. prolixus. The responses to acids within each species were comparable, accounting for 18% in R. prolixus and 22% in R. brethesi. Similar results were found for the mixed chemical category (i.e., 'other'), with 18% in R. prolixus and 15% in R. brethesi. Finally, averaged responses of R. brethesi to esters were slightly higher than those of R. prolixus (14% vs 8%).
Response tuning of grooved peg sensilla

In order to quantify and compare the tuning width of the GP sensilla between the two species, we plotted the species-specific tuning curves and determined the lifetime sparseness (S). The lifetime sparseness is usually calculated to assess how broadly or narrowly tuned olfactory receptors are; in our case it serves as a measure of GP-sensillum tuning. This analysis demonstrates that R. prolixus is indeed tuned to a narrower selection of odors than R. brethesi, with an S-value of 0.5 for R. prolixus compared to 0.35 for R. brethesi. We next wondered whether the stronger responses observed for R. brethesi result from a higher proportion of individual sensilla showing excitatory odor-evoked responses or, less likely, from a decrease in inhibitory sensillum responses in R. brethesi compared to R. prolixus. Thus, in order to further characterize these responses, we analyzed single odor-sensillum combinations. Since each GP sensillum was screened with a comprehensive panel of 38 odors, our SSR data comprised 950 odor-sensillum combinations in R. prolixus and 380 in R. brethesi. While in R. prolixus only 31% of these odor-sensillum combinations yielded responses >15 spikes s⁻¹ above the solvent response, 60% did in R. brethesi. This difference was also consistent at higher spike frequencies: in R. prolixus, only 7% of these combinations yielded responses higher than 50 spikes s⁻¹ above solvent, while in R. brethesi 26% did. Responses above 100 spikes s⁻¹ were generally scarce in both species. Inhibitory responses were less prevalent than excitatory ones, with only 5% of the odor-sensillum combinations identified as inhibitory (<−15 spikes s⁻¹ compared to the solvent control) in both R. prolixus and R. brethesi.
Inhibition could not be attributed to a single odorant, since 53% of the odors in the panel generated at least one odor-sensillum inhibition in R. prolixus, and 32% of the odors resulted in an inhibition in R. brethesi. Taken together, our data suggest that the stronger responses seen in R. brethesi can be attributed to a higher proportion of responses above 15 spikes s⁻¹, and not to a difference in inhibitory responses between these species.

Functional classification of grooved peg sensilla

To further assign the measured odor responses to distinct functional GP sensillum subtypes in each of the two species, we performed an agglomerative hierarchical clustering analysis. Responses could be clustered into 4 groups in each species, corresponding to putative functional sensillum types classified as GP1 to GP4. It should be noted that, in both species, all of the sensillum types responded to butyric acid, as well as to propionic acid. In particular, strong responses (i.e., >50 spikes s⁻¹) to acids were more prominent in R. brethesi, with all of the sensillum types responding to at least 7 out of the 17 acid compounds tested. A major difference between the species was the response to our custom OR blend. While only one sensillum responded to it in R. prolixus (with >50 spikes s⁻¹), 50% of the sensilla showed a response to the blend in R. brethesi. As each of the four putative sensillum types responded to a particular combination of odors, we propose these as diagnostic odors for each specific GP type. In R. prolixus, GP type 1 (Rp-GP1), which accounts for 40% of the GP sensilla recorded from, responds preferentially to the amines trimethylamine, ammonia and ethylamine, as evidenced by the average responses. Rp-GP2 comprises 16% of the GP sensilla and responds best to propionic acid, triethylamine, spermine, spermidine and benzaldehyde.
Rp-GP3 shows the highest responses to isoamylamine and butyryl chloride and accounts for 28% of the GP sensilla, while Rp-GP4, representing 16% of the sensilla, responds to ammonia, ethylamine and butyryl chloride. In R. brethesi, the type 1 GP sensillum (Rb-GP1) responded preferentially to butyric acid and was inhibited by amyl acetate. Rb-GP2, with a response profile similar to Rb-GP1, differed from it in the responses to amyl acetate and 2-oxopropanoic acid; it also showed higher responses to isoamyl acetate and to butyric, valeric and formic acids than Rb-GP1. The Rb-GP3 type showed high responses to 2-oxopropanoic acid and formic acid. Finally, type GP4 of R. brethesi showed strong responses to benzaldehyde, ammonia and propionaldehyde. Rb-GP1 represented 30%, Rb-GP2 20%, Rb-GP3 20% and Rb-GP4 30% of the total number of grooved peg sensilla recorded from in this species.

Odor tuning to chemical classes

Next, we analyzed whether the individual sensillum types respond preferentially to particular chemical classes. In R. prolixus, Rp-GP1 responded strongest to amines, and Rp-GP2 to aldehydes and, to a lesser extent, amines. Rp-GP3 did not respond preferentially to any chemical class, with most responses being to butyryl chloride, and Rp-GP4 showed the strongest responses to amines. In R. brethesi, all of the sensillum types responded to at least two of the chemical classes tested. While both Rb-GP1 and Rb-GP3 showed the strongest responses to acids, Rb-GP3, but not Rb-GP1, responded additionally to aldehydes. Rb-GP2 did not respond to any particular odor class, with its highest responses to the OR blend. Finally, Rb-GP4 responded mainly to aldehydes, followed by amines. We next evaluated whether odor compounds of a certain carbon chain length evoked stronger responses in the Rhodnius grooved peg sensilla by focusing on C1-to-C18 acids and amines. In R. prolixus, we observed higher responses to short-chain carboxylic acids (C1-6/7), with three of the sensillum types showing a significant negative correlation between carbon chain length and response strength (Pearson correlation; Rp-GP1: r = -0.87, p = 0.0005; Rp-GP2: r = -0.59, p = 0.054; Rp-GP3: r = -0.75, p = 0.008; Rp-GP4: r = -0.61, p = 0.045, n = 11). Interestingly, GP2 of R. prolixus showed weaker responses to short-chain amines, but stronger ones to those with long chains (C6-C10). Acid carbon chain length also appeared to be relevant for R. brethesi, where it was negatively correlated with response intensity in 2 of the 4 sensillum types (Pearson correlation; Rb-GP1: r = -0.89, p = 0.0002; Rb-GP2: r = -0.76, p = 0.006, n = 11). In contrast, for the amines, a decrease in activity with increasing carbon chain length was seen in GP4 of R. prolixus (Pearson correlation; r = -0.85; p = 0.016, n = 7) but not in the GP sensillum types of R. brethesi. However, when compared to R. prolixus, R. brethesi displayed stronger responses to short-chain (C1-C5) amines (R. prolixus: 19.63 ± 2.65, n = 125; R. brethesi: 19.63 ± 2.65, n = 51; unpaired t-test, p = 0.0005). Finally, we addressed the comparability of the described functional sensillum types between species. To get a notion of similarity between the GP types described, we calculated the Euclidean distances between the sensillum types of the two species. The averaged response values were first z-score normalized (mean = 0, standard deviation = 1) to ensure that the measured distance reflects dissimilarities between response patterns and not magnitude. The sensillum pair that showed the lowest distance was GP2 in R. prolixus (Rp-GP2) and GP3 in R. brethesi (Rb-GP3; distance = 4.47). The pair Rp-GP4 and Rb-GP3 was at the other end of the spectrum, with the highest distance (8.24). In between, we found most (88%) of the sensillum combinations to lie within the range of 6–8.3 units of distance.
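The pattern-based distance computation described above can be sketched as follows. The response vectors here are random placeholders standing in for the averaged sensillum-type profiles; the z-scoring removes magnitude differences so that only response patterns are compared.

```python
import numpy as np

def zscore(x):
    # normalize a response profile to mean 0, standard deviation 1
    return (x - x.mean()) / x.std()

rng = np.random.default_rng(1)
# average response of each sensillum type (GP1-GP4) to the 38 odors
rp_types = {f"Rp-GP{i + 1}": rng.normal(size=38) for i in range(4)}
rb_types = {f"Rb-GP{i + 1}": rng.normal(size=38) for i in range(4)}

# Euclidean distance between every cross-species pair of z-scored profiles
distances = {
    (rp, rb): float(np.linalg.norm(zscore(p) - zscore(q)))
    for rp, p in rp_types.items()
    for rb, q in rb_types.items()
}

closest_pair = min(distances, key=distances.get)
```

The smallest entry of `distances` identifies the most similar cross-species pair, analogous to the Rp-GP2/Rb-GP3 pair reported above.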
In order to further explore the differences between the two species, we performed a principal component analysis (PCA) in which the 38-dimensional sensillum space was reduced to lower dimensions . We focused on the first two components, which together explain 60% of the variance. While the sensillum types of R . brethesi appeared to be more densly clustered than the ones of R . prolixus , the distance between individual sensillum types was larger within an individual species than between species (ANOSIM, R = 0.09, p = 0.32). Taken together, these results show that sensillum subtypes are not necessarily species-specific, despite each showing a different odor tuning breath and responding to a specific set of ligands.
Previous studies have demonstrated that sensillum patterns in haematophagous insects, including Triatominae species, reflect specific adaptations to different hosts and habitats . Through a comparative qualitative and quantitative analysis of the main olfactory organ, the antenna, we assessed potential morphological differences between the generalist R . prolixus and the sylvatic specialist R . brethesi , using scanning electron microscopy (SEM) (Figs , and ). A qualitative analysis of the morphological patterns of sensilla on the antennae of both species did not reveal major differences. For further analysis, sensilla were classified according to the study by Shanbhag et al , as has been done previously for triatomines . In both species, the second segment, or pedicel, was found to be enriched with sensilla described to have a mechanosensory function : sensilla trichobothrium, bristles I and III ( and Figs). Moreover, the cave organ, a sensillum type shown to have a thermo-receptive function in R . prolixus and T . infestans , was also found in R . brethesi . However, we were unable to identify the previously described ornamented pore in R . brethesi , possibly due to the angle of orientation of our preparation. Notably, in one preparation, our micrographs revealed two previously undescribed sensillum types on the pedicel of R . prolixus . One is a peg-in-pit sensillum with an inflexible socket and no evident pores, housed within a chamber in the antennal cuticle . This type is reminiscent of the thermosensitive sensilla coeloconica , but its function remains unknown. The second type resembles a type 3 coeloconic sensillum, characterized by two pores at its base, described in other hemipteran species . Wall-pore sensilla with inflexible sockets were found on flagellomere I and the distal half of flagellomere II in both species.
These include the trichoid and basiconic sensilla (also known as thick- and thin-walled trichoid sensilla , respectively), as well as the double-walled grooved peg sensilla (referred to as basiconic sensilla in ) . All of these sensillum types show slight longitudinal grooves filled with pores or slits, indicative of an olfactory function . On the same antennal segments of both species we also identified a poreless sensillum with an inflexible socket: the campaniform sensillum . As shown in other studies, quantitative differences in the number of olfactory sensilla between species potentially reflect particular adaptations to their ecological niches. We therefore performed autofluorescence confocal scans of glycerol-embedded antennae to quantify the sensillum density on flagellomere II, where putative olfactory sensilla reach their highest density and where inter-specific differences were previously reported among triatomines . Our results show that the density of both basiconic and trichoid sensilla is significantly higher in R . brethesi than in R . prolixus . In contrast, the density of grooved peg sensilla did not differ significantly between the two species.
Rhodnius prolixus

Previous studies aimed at characterizing odor-evoked responses in triatomines have focused on T . infestans , and reported responses to a small number of chemical compounds, comprising aldehydes, acids and amines . To assess whether other chemical classes are detected by the antennae of triatomines, we performed EAG recordings in R . prolixus using a panel of 27 odors, belonging to various chemical classes, that have previously been shown to elicit behavioral responses in triatomines and other haematophagous insects ( and Tables). Significant responses (p<0.05) were observed for 33% of the tested odors (one-sample t test against zero). Of these, the strongest response was to acetic acid, a compound that is present in triatomine feces and mediates aggregation , followed by propionic acid, a known host volatile to which T . infestans responds behaviorally . Significant responses were also seen to the main component of the alarm pheromone , isobutyric acid, a compound that is also present in host volatiles , and to the closely related compound butyric acid. Taken together, responses to acids represented 44% of all significant responses. Additionally, R . prolixus showed significant, though smaller, responses to other host volatiles, such as cyclohexanone, amyl acetate and trimethyl amine. A significant response was recorded for 30% carbon dioxide, a chemosensory cue that is attractive to T . infestans at lower concentrations . Interestingly, we also observed a significant olfactory response to butyryl chloride. While this compound is proposed to act as an insect repellent, as it inhibits the activity of the carbon dioxide-detecting sensory neurons in mosquitoes , its function and detection in triatomines have not been studied so far.
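The significance screen described above, a one-sample t test of odor-evoked EAG amplitudes against zero, can be sketched as follows. The amplitude values below are hypothetical and serve only to illustrate the computation, not to reproduce the study's data:

```python
import math

def one_sample_t(values, mu=0.0):
    """t statistic for a one-sample t test of `values` against mean `mu`."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return (mean - mu) / math.sqrt(var / n)

# Hypothetical EAG amplitudes (mV) for one odor, n = 6 antennae
eag = [0.42, 0.35, 0.51, 0.28, 0.44, 0.39]
t = one_sample_t(eag)
# Compare |t| against the critical value for n - 1 = 5 degrees of freedom
# (2.571 for a two-tailed test at p = 0.05) to decide significance.
```

In practice one would obtain the exact p-value from a t distribution (e.g. `scipy.stats.ttest_1samp`) rather than comparing against a tabulated critical value.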
The EAG recordings of R . prolixus demonstrated that the olfactory system of these insects responds mostly to acids and amines. These compounds are commonly found in the environment of the insects , and their role in regulating odor-guided behavior has been assessed for some species of triatomines . In insects, acids and amines are detected by neurons housed in antennal grooved peg (GP) sensilla . As shown in our morphological studies, this sensillum type is present in the antennae of both Rhodnius species at a low density, making it an ideal system to assess species-specific differences in olfactory tuning. To assess the tuning of individual GP sensilla, here defined as a responding unit, we tested a total of 38 odors, of which 17 were acids and 9 were amines, varying in carbon length and branching. We included additional volatiles (such as indole and amyl acetate) known to be present in, but not exclusive to, vertebrate hosts, or previously shown to be detected by GP sensilla in other insects . In addition, a custom OR blend , composed of compounds typically detected by odorant receptors (ORs) in other species, was also applied. Averaged sensillum responses demonstrated that R . brethesi responded generally more strongly to odors than R . prolixus . In addition, a significant overall interspecific difference was found for 58% of the odorants. While both species exhibited the strongest response to propionic acid, major differences were seen for the following compounds: butyric acid, benzaldehyde, valeric acid, 2-oxopropanoic acid, formic acid, and the OR blend, with R . brethesi displaying a higher response than R . prolixus in all cases. Butyryl chloride was the only compound with a significantly higher response in R . prolixus . Stimulation with palmitic acid generated the strongest inhibitory response in both species. When responses were normalized to the maximum odor response (i.e.
propionic acid in both species), significant differences remained for five odorants: butyraldehyde, butyryl chloride, amyl acetate, 3-methyl indole, and the OR blend. Rhodnius prolixus responded, on average, more frequently to amines than R . brethesi : amines accounted for 40% of responses in R . prolixus compared with 24% in R . brethesi . In contrast, R . brethesi responded more strongly to aldehydes, with 33% compared with 16% in R . prolixus . The responses to acids were comparable between species, accounting for 18% in R . prolixus and 22% in R . brethesi . Similar results were found for the mixed chemical category ( i . e ., ‘other’), with 18% in R . prolixus and 15% in R . brethesi . Finally, averaged responses of R . brethesi to esters were slightly higher than those of R . prolixus (14% versus 8%).
In order to quantify and compare the tuning width of the GP sensilla between the two species, we plotted the species-specific tuning curves and determined the lifetime sparseness (S) . The lifetime sparseness is usually calculated to assess how broadly or narrowly tuned olfactory receptors are; in our case it serves as a measure of GP-sensillum tuning. This analysis demonstrates that R . prolixus is indeed tuned to a narrower selection of odors than R . brethesi , with an S-value of 0.5 for R . prolixus compared with 0.35 for R . brethesi . We next wondered whether the stronger responses observed for R . brethesi result from a higher proportion of individual sensilla showing excitatory odor-evoked responses or, less likely, from a decrease in inhibitory sensillum responses in R . brethesi compared with R . prolixus . Thus, in order to further characterize these responses, we analyzed single odor-sensillum combinations. Since each GP sensillum was screened with a comprehensive panel of 38 odors, our SSR data comprised 950 odor-sensillum combinations in R . prolixus and 380 in R . brethesi . While in R . prolixus only 31% of these odor-sensillum combinations yielded responses >15 spikes s -1 above the solvent response, 60% did in R . brethesi . This difference was also consistent at higher spike frequencies: in R . prolixus , only 7% of the combinations produced responses >50 spikes s -1 above solvent, compared with 26% in R . brethesi . Responses above 100 spikes s -1 were generally scarce in both species. Inhibitory responses were less prevalent than excitatory ones, with only 5% of the odor-sensillum combinations identified as inhibitory (<-15 spikes s -1 compared to the solvent control) in both R . prolixus and R . brethesi .
Inhibition could not be attributed to a single odorant, since 53% of the odors in the panel generated at least one odor-sensillum inhibition in R . prolixus , and 32% did so in R . brethesi . Taken together, our data suggest that the stronger responses seen in R . brethesi can be attributed to a higher proportion of responses being above 15 spikes s -1 , and not to a difference in inhibitory responses between the species.
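The lifetime sparseness measure used above can be computed with a standard formulation; this is a minimal sketch, in which the rectification of inhibitory (negative) responses to zero is our assumption, as conventions vary:

```python
def lifetime_sparseness(responses):
    """Lifetime sparseness S in [0, 1]: 1 = narrowly tuned (responds to
    few odors), 0 = uniformly broad tuning. Inhibitory (negative)
    responses are rectified to zero here, a common but not universal
    convention."""
    r = [max(float(x), 0.0) for x in responses]
    n = len(r)
    total = sum(r)
    if total == 0.0:
        return 0.0  # no excitatory response at all
    a = (total / n) ** 2 / (sum(x * x for x in r) / n)
    return (1.0 - a) / (1.0 - 1.0 / n)
```

Under such a measure, the higher S-value for R . prolixus (0.5) than for R . brethesi (0.35) indicates narrower tuning, consistent with the interpretation above.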
To further assign the measured odor responses to distinct functional GP sensillum subtypes in each of the two species, we performed an agglomerative hierarchical clustering analysis . Responses could be clustered into 4 groups in each species, corresponding to putative functional sensillum types classified as GP1 to GP4. It should be noted that, in both species, all of the sensillum types responded to butyric acid as well as to propionic acid. In particular, strong responses (i.e. >50 spikes s -1 ) to acids were more prominent in R . brethesi , with all of the sensillum types responding to at least 7 of the 17 acid compounds tested. A major difference between the species was the response to our custom OR blend. While only one sensillum responded to it in R . prolixus (with >50 spikes s -1 ), 50% of the sensilla showed a response to the blend in R . brethesi . As each of the four putative sensillum types responded to a particular combination of odors , we propose these as diagnostic odors for each specific GP type . In R . prolixus , GP type 1 (Rp-GP1), which accounts for 40% of the GP sensilla recorded, responds preferentially to the amines trimethylamine, ammonia and ethylamine, as evidenced by the average responses. Rp-GP2 comprises 16% of the GP sensilla and responds best to propionic acid, triethylamine, spermine, spermidine and benzaldehyde. Rp-GP3 shows the highest responses to isoamylamine and butyryl chloride and accounts for 28% of the GP sensilla, while Rp-GP4, representing 16% of the sensilla, responds to ammonia, ethylamine and butyryl chloride. In R . brethesi , the type 1 GP sensillum (Rb-GP1) responded preferentially to butyric acid and was inhibited by amyl acetate . Rb-GP2, with a response profile similar to Rb-GP1, differed from it in its responses to amyl acetate and 2-oxopropanoic acid. It also showed higher responses to isoamyl acetate and to butyric, valeric and formic acids than Rb-GP1.
The Rb-GP3 type showed high responses to 2-oxopropanoic acid and formic acid. Finally, type GP4 of R . brethesi showed strong responses to benzaldehyde, ammonia and propionaldehyde. Rb-GP1 represented 30%, Rb-GP2 20%, Rb-GP3 20% and Rb-GP4 30% of the grooved peg sensilla recorded in this species.
Odor tuning to chemical classes

Next, we analyzed whether the individual sensillum types respond preferentially to particular chemical classes . In R . prolixus , Rp-GP1 responded most strongly to amines, and Rp-GP2 to aldehydes and, to a lesser extent, to amines. Rp-GP3 did not respond preferentially to any chemical class, with most of its responses being to butyryl chloride, and Rp-GP4 showed the strongest responses to amines. In R . brethesi , all of the sensillum types responded to at least two of the chemical classes tested. While both Rb-GP1 and Rb-GP3 showed the strongest responses to acids, Rb-GP3, but not Rb-GP1, additionally responded to aldehydes. Rb-GP2 did not respond to any particular odor class, with its highest responses to the OR blend. Finally, Rb-GP4 responded mainly to aldehydes, followed by amines. We next evaluated whether odor compounds of a certain carbon length evoked stronger responses in the Rhodnius grooved peg sensilla, focusing on C1 to C18 acids and amines . In R . prolixus , we observed higher responses for short chain carboxylic acids (C1-6/7), with three of the sensillum types showing a significant negative correlation between carbon chain length and response strength (Pearson correlation; Rp-GP1: r = -0.87, p = 0.0005; Rp-GP2: r = -0.59, p = 0.054; Rp-GP3: r = -0.75, p = 0.008; Rp-GP4: r = -0.61, p = 0.045, n = 11). Interestingly, GP2 of R . prolixus showed weaker responses to short chain amines, but stronger ones to those with long chains (C6-C10). Acid carbon chain length also appeared to be relevant for R . brethesi , where it was negatively correlated with response intensity in 2 of the 4 sensillum types (Pearson correlation; Rb-GP1: r = -0.89, p = 0.0002; Rb-GP2: r = -0.76, p = 0.006, n = 11). In contrast, for the amines, a decrease in activity with increasing carbon length was seen in GP4 of R . prolixus (Pearson correlation; r = -0.85; p = 0.016, n = 7), but not in any GP sensillum type of R . brethesi . However, when compared to R . prolixus , R .
brethesi displayed stronger responses to short chain (C1-C5) amines ( R . prolixus : 19.63 ± 2.65, n = 125; R . brethesi : 19.63 ± 2.65, n = 51; unpaired t test, p = 0.0005). Finally, we addressed the comparability of the described functional sensillum types between species. To gauge the similarity between the GP types described, we calculated the Euclidean distances between the sensillum types of the two species . The averaged response values were first z-score normalized (mean = 0, standard deviation = 1) to ensure that the distance measured reflects dissimilarities between response patterns rather than response magnitude. The sensillum pair with the lowest distance was GP2 in R . prolixus (Rp-GP2) and GP3 in R . brethesi (Rb-GP3; distance = 4.47). The pair Rp-GP4 and Rb-GP3 was at the other end of the spectrum, with the highest distance (8.24). In between, most (88%) of the sensillum combinations fell within 6–8.3 distance units. To further explore the differences between the two species, we performed a principal component analysis (PCA) in which the 38-dimensional sensillum space was reduced to lower dimensions . We focused on the first two components, which together explain 60% of the variance. While the sensillum types of R . brethesi appeared to be more densely clustered than those of R . prolixus , the distance between individual sensillum types was larger within a species than between species (ANOSIM, R = 0.09, p = 0.32). Taken together, these results show that sensillum subtypes are not necessarily species-specific, despite each showing a different odor tuning breadth and responding to a specific set of ligands.
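The between-type comparison above, z-score normalization of each response profile followed by Euclidean distance, can be sketched as below; the function names are illustrative, not from the study's analysis code:

```python
import math

def zscore(profile):
    """Normalize a response profile to mean 0, standard deviation 1.
    Assumes the profile is not constant (sd > 0)."""
    n = len(profile)
    mean = sum(profile) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in profile) / n)
    return [(x - mean) / sd for x in profile]

def profile_distance(a, b):
    """Euclidean distance between z-scored profiles: sensitive to the
    pattern of responses across odors, not their overall magnitude."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(zscore(a), zscore(b))))
```

Because of the normalization, scaling a profile leaves the distance unchanged: a sensillum type responding twice as strongly but with the same relative odor preferences scores as identical.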
In this study, we assessed morphological and functional differences in the peripheral olfactory systems of R . prolixus and R . brethesi , two species differing in distribution and refuge habitat. Morphological differences were found in the form of a higher density of basiconic and trichoid sensilla in R . brethesi compared with R . prolixus , a character previously described in other triatomines . A correlation between the number of olfactory sensilla and habitat range has been proposed for triatomines, as well as for other haematophagous insect species . However, this remains to be confirmed for R . prolixus and R . brethesi , as rearing conditions may negatively affect sensillum numbers . Nevertheless, our results are intriguing, as R . brethesi is a refuge specialist, nesting in only one species of palm tree in sylvatic environments, suggesting that this species may have different needs, or use a different strategy, for detecting and discriminating odors compared with R . prolixus . To assess the olfactory function of the R . prolixus antenna, we initially used EAG analysis with known biologically active compounds previously shown to be involved in intraspecific communication and other odor-guided behaviors. Surprisingly, the antenna of R . prolixus responded to only a limited number of the compounds tested. For instance, we did not see a significant response to either 2-butanone or 3-methyl-2-butanol, compounds known to be part of the sexual pheromone . We hypothesize that this lack of antennal response is likely a consequence of the low number of specialized neurons detecting these compounds, such that the limited sensitivity of EAG analysis failed to provide a reliable signal. However, it is also possible that these chemicals are detected by organs other than the antenna, as several odorant and ionotropic receptors (ORs and IRs), along with the odorant co-receptor orco , are also expressed in the tarsi, genitalia and rostrum .
Most of the odorants evoking a significant antennal response were volatiles characteristic of the vertebrate (amniote) odor signature, such as acetic acid, propionic acid, butyric acid, isobutyric acid, ethyl pyruvate, trimethyl amine and carbon dioxide. All of these volatile compounds have been identified in the headspace of vertebrates, and males and females of R . prolixus have been demonstrated to be attracted to acetic and isobutyric acid . These compounds, however, in addition to often occurring in vertebrate host secretions, are also used in intraspecific communication , highlighting the importance of sensory parsimony in these insects . In insects, ORs and IRs are responsible for the detection of volatile molecules. IRs are thought to be ancestral, as they are found in basal insects and in their most recent phylogenetic ancestor . These receptors, expressed in the dendrites of OSNs housed in grooved peg sensilla (i.e. double-walled sensilla), serve a conserved function in the detection of acids and amines across insect taxa . Yet, we show that triatomine insects with different habitat and host requirements differ in the olfactory tuning of their GP sensilla. While both species respond to acids and amines varying in branching and carbon length, R . prolixus appears to be more tuned to amines than its sylvatic sibling. This result is in line with a previous study showing that R . prolixus is attracted to amines present in vertebrate-host excretions, which guide its host search . Both species differed in their responses to certain odorants. R . prolixus displayed a significantly stronger response to butyryl chloride than R . brethesi . This odor compound is assumed to have a repellent function in mosquitoes, where it inhibits the activity of carbon dioxide-responding neurons . Whether butyryl chloride serves a similar role in triatomines requires confirmation. Interestingly, R .
brethesi showed higher responses to amyl acetate, a compound found in fruits ; 3-methyl indole, which occurs in feces and, at low concentrations, in inflorescences ; butyraldehyde; and the OR blend. Previous studies have shown that compounds present in this blend are detected by ORs in other insect species . While only one of the sensilla probed in R . prolixus responded to this blend, half of them did in R . brethesi . This suggests that ORs may be present in the grooved peg sensilla of R . brethesi but not of R . prolixus , as is the case, for instance, for the odorant receptor OR35a, expressed in coeloconic sensilla of Drosophila melanogaster . Overall, these differences might reflect specific adaptations to the species' corresponding environments. Based on their odor response profiles, we identified four functional sensillum subtypes in each species. This contrasts with studies on T . infestans , in which only three grooved peg sensillum types were described for 5th instar nymphs . This discrepancy might have several explanations. First, it is possible that differences among triatomine species are larger than expected, as suggested, e.g., by ultra-structural studies demonstrating different numbers of OSNs in GP sensilla of triatomines . Second, different patterns of behavior in response to odorants are also recognizable between T . infestans and R . prolixus . Third, we recorded responses from adults, whereas 5th instar nymphs were examined in the case of T . infestans . The antenna of R . prolixus undergoes significant changes between the 5th instar and the adult, with an increased number of olfactory sensilla on flagellomeres I and II , probably related to intraspecific communication or behavioral needs. These changes might account for the additional sensillum subtype observed in adults of Rhodnius .
Fourth, and lastly, our screen recorded responses to a larger number of chemicals than previous studies , potentially improving the resolution of physiological sensillum subtypes. It should be noted, however, that our investigation is not conclusive, and recordings with additional compounds might help to complete the ongoing work of sensilla classification in R . prolixus and R . brethesi . Furthermore, it would be interesting to analyze dose-response characteristics in future studies to identify the best odor ligands for the different GP sensillum types. Interestingly, R . brethesi presented overall higher and broader olfactory responses than R . prolixus , as reflected in the average responses, sensillum odor tuning and lifetime sparseness of the SSR data, suggesting a sensory differentiation between these species. In insects, the number of olfactory receptors and the complexity of the ecological niche appear to be highly correlated, with the number of ORs increasing with niche complexity. For example, while tsetse flies have only 40–46 ORs, eusocial insects such as ants possess over 350 ORs . Moreover, in mosquitoes, host preference has been suggested to account for differences in the chemosensory gene repertoire between sibling species . Notably, in triatomines, odorant-binding proteins (OBPs) and chemosensory proteins (CSPs) are expressed at lower levels in domestic insects of T . brasiliensis than in sylvatic and peridomestic ones . Given that R . prolixus is not exclusively domiciliated, and given the current lack of data on the chemical cues that sylvatic triatomine species encounter in the wild, we are unable to determine conclusively whether the observed differences reflect an adaptation of the two Rhodnius species to their specific habitats. Future experiments comparing wild and domiciliated individuals of R . prolixus are required to shed further light on this hypothesis.
It is important to note that triatomines process odor information in the context of other sensory cues . In fact, R . prolixus exhibits an astounding thermosensitivity, and heat represents the main host-associated cue for these insects . Therefore, in the context of our results, R . prolixus may rely less heavily on olfactory cues than R . brethesi , similar to what has previously been observed in other insect species . To conclude, our results confirm previous observations of phenotypic plasticity in the genus Rhodnius . We demonstrate that the species differ not only in the morphology of their sensory equipment but also functionally, with R . prolixus presenting a distinctly decreased olfactory function. It is likely that the condition found in the sylvatic species represents the ancestral character state in the subfamily, whereas the derived, reduced condition might be associated with changes in habitat preference. With the ongoing rapid destruction of natural environments , it is likely that more species will follow this path. Careful analyses of differences and potential shifts in the sensory apparatus may prove helpful in the design of efficient future vector control strategies.
S1 Fig. Scanning electron microscopy (SEM) of antennal sensilla of Rhodnius prolixus . Arrows indicate (A) sensillum trichobothrium (I) and bristle II (II), (B) peg-in-pit sensilla, (C) bristle III, (D) ornamented pore, and (E) type 3 coeloconic sensilla, on the pedicel of the antenna. (TIF)
S2 Fig. Scanning electron microscopy (SEM) of antennal sensilla of Rhodnius brethesi . Arrows indicate (A) sensillum trichobothrium, (B) cave organ at the pedicel (I) and bristle III (II), (B') detail of the cave organ, (C) basiconic (also known as thin-walled trichoid) sensillum presenting the ecdysis channel, (D) coeloconic sensillum. (TIF)
S1 Table. Description of the odor panel used in electrophysiological experiments. (XLSX)
S2 Table. Chemical compounds used in EAG recordings. (TIF)
S3 Table. Chemical compounds used in SSR recordings. (TIF)
S4 Table. Euclidean distance for the z-score normalized sensillum types of R . prolixus and R . brethesi . (TIF)
A Highly Sensitive Pan-Cancer Test for Microsatellite Instability | 69c44edd-90e1-4dab-837d-f4bef33e136d | 10629437 | Anatomy[mh] | Microsatellites are 1 to 6 bp short tandem DNA repeats constituting approximately 3% of the human genome. Microsatellites are prone to DNA replication errors resulting from polymerase slippage, which are effectively corrected by the DNA mismatch repair (MMR) system. Inactivation of any of the MMR genes ( MLH1 , MSH2 , MSH6 , and PMS2) results in hypermutability of these microsatellite repeats, a condition referred to as microsatellite instability (MSI). , , , Individuals carrying a germline pathogenic variant in one copy of an MMR gene are said to have Lynch syndrome and have up to an 80% lifetime risk of developing cancer of the colon, endometrium, stomach, ovary, small intestine, hepatobiliary tract, urinary tract, pancreas, prostate, or brain or developing sebaceous skin tumors. Tumors with an MSI phenotype can arise from loss of both alleles of an MMR gene, either via somatic loss of the second MMR allele in an individual with Lynch syndrome or by other mechanisms, including somatic biallelic MMR gene mutation or somatic biallelic hypermethylation of the MLH1 gene causing nonhereditary sporadic MSI tumors. , , ,
MMR deficiency can be detected by assessing MMR protein levels with immunohistochemistry (IHC) or functionally by MSI testing, with proficient mismatch repair (pMMR) tumors exhibiting normal MMR protein levels and lack of MSI, and deficient mismatch repair (dMMR) tumors exhibiting loss of one or more MMR proteins and presence of MSI. , Both MSI and IHC assays are sensitive tests for detection of loss of MMR activity, and results from the two tests are usually highly concordant and complementary. A recent comparison of the PCR-based MSI Analysis System, version 1.2 (Promega Corporation, Madison, WI; also known as the Promega pentaplex panel), with IHC in a large population-based study of colorectal cancer (CRC) found that concordance between the two methods was approximately 97%. The concordance between MSI-PCR and IHC testing for endometrial cancers is also high but depends on the MSI-PCR analysis methods and the microsatellite marker panels used. , , , Diagnosis of Lynch syndrome is a multistep process that begins with MSI-PCR or IHC screening for detection of MMR deficiency. Subsequently, dMMR tumors are tested for the presence of MLH1 promoter methylation or, in CRC, the BRAF V600E mutation, to differentiate them from Lynch syndrome tumors, which rarely exhibit these molecular features. Germline sequencing for pathogenic variants in any of the four MMR genes and EPCAM (deletions in the 3′ end of the EPCAM gene can cause methylation-induced transcriptional silencing of MSH2 ) is then performed on suspected cases of Lynch syndrome to confirm the diagnosis. The guidelines from the 1998 National Cancer Institute workshop, referred to as the Bethesda guidelines, recommended a reference panel of five microsatellite markers consisting of two mononucleotide repeats ( BAT-25 and BAT-26 ) and three dinucleotide repeats ( D2S123 , D5S346 , and D17S250 ) for MSI-PCR testing.
Instability in two or more of these markers classifies a tumor as MSI-High (MSI-H), one marker as MSI-Low, and zero markers as microsatellite stable (MSS). In 2004, the Bethesda guidelines were revised to recommend the use of a panel consisting entirely of mononucleotide repeats to further increase sensitivity and specificity for detecting MSI. , Discordances of 3% to 5% between MSI-PCR and MMR IHC assays can occur via different mechanisms. , , , , , False-negative IHC results can occur in MSI-H tumors expressing a nonfunctional protein with retained antigenicity. , , , , False-positive IHC test results have been reported in MSS tumors with loss of MSH6 expression after neoadjuvant therapy. Interpretation of IHC staining results can be challenging because tumors rarely exhibit loss of an MMR protein throughout the entire sample, and the definition of what constitutes abnormal MMR expression is still evolving. , This intratumor heterogeneity contributes to variable interpretation by observers, as do the experience of the pathologist and the guidelines being followed. , , , Conversely, current MSI-PCR tests are reportedly less sensitive for certain types of cancers, especially tumors with MSH6 loss, or in samples with low tumor cellularity. , MSI-PCR sensitivity can vary depending on which MSI marker panel is used and the method of analysis. The use of outdated or poorly designed microsatellite marker panels and interpretation methods probably contributes to the reported lower sensitivity of MSI compared with IHC in some cancer types. , , ,
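The marker-counting rule from the original Bethesda guidelines can be expressed as a minimal sketch; the function and argument names are illustrative, not taken from any particular assay software:

```python
def classify_msi(n_unstable_markers: int) -> str:
    """Classify a tumor from the number of unstable markers in the
    five-marker Bethesda reference panel: >= 2 unstable -> MSI-High,
    exactly 1 -> MSI-Low, 0 -> microsatellite stable (MSS)."""
    if n_unstable_markers >= 2:
        return "MSI-High"
    if n_unstable_markers == 1:
        return "MSI-Low"
    return "MSS"
```

Larger panels, such as NGS-based assays covering hundreds of loci, typically classify by the fraction of unstable loci against a validated threshold rather than by a fixed marker count.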
Determination of MSI status in cancers is of clinical importance because of its diagnostic, prognostic, and therapeutic significance. Universal MSI testing for all CRC and endometrial cancers, regardless of age at diagnosis or family history, is now recommended by many professional organizations and guidelines, including the National Comprehensive Cancer Network. MSI-H is more common in early-stage cancers and is therefore associated with a more favorable prognosis. For example, about 20% of stage I to II, 12% of stage III, and 4% to 5% of stage IV metastatic CRC are MSI-H. In recent years, MSI has gained considerable attention because of its role in predicting patient response to immune checkpoint inhibitor therapy across multiple tumor types. , Tumors with MMR deficiency elicit a positive immune response due to the expression of neoantigens by the tumor cells. , , MMR-deficient cancer cells producing neoantigens may evade the immune system by up-regulation of inhibitory pathways, including the programmed cell death 1/programmed death ligand-1 immune checkpoint, and blockade of this inhibitory pathway with monoclonal antibodies permits an antitumor immune response. The first clinical trial of immune checkpoint blockade with pembrolizumab (Merck, Kenilworth, NJ), a programmed cell death 1 immune checkpoint inhibitor, found that the objective response rate was 40% for patients with MSI-H CRC and 0% with MSS CRC. In the follow-up trial on 12 different cancer types with evidence of MMR deficiency, the objective response rate was 53%, and complete responses were achieved in 21% of patients. Based on the results of these clinical trials, the US Food and Drug Administration approved the use of pembrolizumab for previously treated patients with MSI-H/dMMR advanced or metastatic solid tumors in 2017.
More recently, first-line treatment with pembrolizumab monotherapy was found to provide significant and clinically meaningful improvements in progression-free survival compared with standard of care with chemotherapy as first-line treatment in patients with MSI-H/dMMR metastatic CRC. Other immune checkpoint blockade therapies are being evaluated for a variety of cancer types, and predictive biomarkers of immunotherapy such as MSI status are urgently needed to identify individuals who are likely to respond.
MSI has been identified in most cancer types, with varying prevalence ranging from a high of approximately 30% in endometrial cancers to a low of approximately 0.1% in melanoma. , , , , , , Using next-generation sequencing (NGS) of microsatellite repeats, the overall frequency of MSI-H tumors across all cancer types was estimated to be approximately 4% of 12,019 cases from 32 cancer types by Le et al. A similar MSI-H frequency of 3.7% of 26,464 cases from 43 cancer types was obtained by using MSI-PCR. Studies of microsatellite alterations in Lynch syndrome–associated cancers report differing patterns of MSI, including the size of deletions and number of affected markers. , Blake et al speculated that this varying mutational burden between cancer types could be explained by different time intervals since loss of MMR, with longer intervals resulting in larger deletions in a greater number of microsatellites. This hypothesis is supported by Mandal et al, who studied newly generated dMMR cell lines serially passaged over 1 or 4 months producing differing levels of MSI in the genome. The MSI-H cell line cultured for 4 months exhibited both a higher proportion of unstable microsatellites and higher tumor mutational burden compared with the MSI intermediate cell line cultured for only 1 month. Similar results were observed in studies of 4- and 12-month–old MLH1 -deficient mice, in which single-molecule MSI analysis revealed that deletions in mononucleotide repeats were larger and occurred more frequently in the intestinal cells of the older mice. This hypothesis is further supported by studies showing accumulation of larger mononucleotide repeat deletions in more advanced neoplasms compared with precancerous and early-stage tumors. For example, pediatric constitutional dMMR cancers caused by biallelic germline pathogenic variants in MMR genes exhibit mostly 1 bp deletions. 
Similarly, precancerous colon polyps often present with smaller changes in microsatellite repeat length compared with more advanced adenocarcinomas. , Another factor influencing the overall size of microsatellite deletions in dMMR tumors is the length of the repeat. Two models have been proposed for accumulation of deletions in mononucleotide repeats: a stepwise model in which only one repeat motif is altered per mutational event, and a two-phase model in which either a single repeat motif or multiple motifs are altered per mutational event. In dMMR tumors, the stepwise model best described mutations observed in short mononucleotide repeats (SMRs), and the two-phase model fit mutations in longer mononucleotide repeats. Similar results have been reported in a dMMR mouse model study in which most mutations in mononucleotide repeats involved losses of single repeat units. At mononucleotide repeats >15 bp, a few cases of deletions involving multiple repeat units were observed and, although rare, indicate that a two-phase mutational process may be operating at longer repeat tracts.
A key advance in MSI testing was the adoption of marker panels exclusively containing mononucleotide repeats. This change improved the sensitivity and specificity of MSI testing over the Bethesda reference panel, which includes dinucleotide repeats that exhibit lower sensitivity and specificity for detection of dMMR, especially for MSH6 -deficient tumors. , In contrast, a panel of mononucleotide repeats correctly identified 100% of MSH6 -deficient cancers. Another advance in MSI testing is the elimination of the requirement for matching normal samples. This has been achieved with panels of monomorphic microsatellite markers, as in the Idylla (Biocartis, Mechelen, Belgium) PCR-based assay, which uses high-resolution melt analysis for MSI determination, and in some MSI-NGS systems. , , , However, sensitivity in non-CRCs may be reduced without a normal sample comparator. , , Non-CRC tumors often exhibit a less pronounced MSI phenotype; that is, they have smaller alterations to microsatellites and can be more challenging to detect by MSI-PCR. , Multiple strategies to improve pan-cancer MSI detection are being explored, including: i) the use of more microsatellite markers, ii) selection of microsatellite markers for specific cancer types, and iii) selection of microsatellite markers with a generalized higher sensitivity across all cancer types. Several groups have used larger marker panels for MSI-PCR testing, with mixed results. Cicek et al used a 10-locus panel consisting of four mononucleotide and six dinucleotide markers and achieved a sensitivity of 97% for detecting MSI-H/dMMR tumors using just the four mononucleotide markers in the panel. Bai et al used a 24-locus panel consisting of six mononucleotide markers and 18 dinucleotide markers; the highest sensitivity of any mononucleotide or dinucleotide marker was 96% and 50%, respectively. 
In that study, the percent agreement with IHC as the reference standard was 87% for the five markers in the Bethesda panel compared with 56% for the entire 24-locus panel. Thus, increasing the number of markers used for MSI-PCR did not improve detection of MSI and resulted in reduced accuracy of MSI determination. MSI detection using NGS of microsatellite repeats has emerged as an alternative to standard MSI-PCR testing, and the technology lends itself to simultaneous evaluation of a large number of microsatellite markers. , , However, using more markers does not necessarily translate into better results. Typically, the microsatellite markers used in NGS assays are incidentally included because they are in intronic regions of the target gene panels. The use of unselected microsatellite markers not specifically chosen for their ability to detect MSI-H/dMMR results in a wide range of individual marker sensitivities. Other limitations with current NGS technologies are the high error rate for sequencing long homopolymer runs, limitation of low tumor cell content, high cost, and the lack of standardization. , Recent guidelines from the College of American Pathologists conclude there is currently insufficient evidence to support broad-based MSI testing by NGS. The strategy of using large numbers of unselected microsatellite markers for MSI determination can be illustrated by the Memorial Sloan Kettering–Integrated Mutation Profiling of Actionable Cancer Targets NGS platform, which contains >2000 mononucleotide repeats used to assess MSI status in >15,000 tumors from >50 cancer types. A total of 103 Lynch syndrome cases were identified with germline MMR pathogenic variants; 51% were MSI-H, 13% MSI-Indeterminate, and 36% MSS. A follow-up study on 1100 endometrial cancers found 25 cases with germline MMR gene pathogenic variants, of which 83% with MSH6 pathogenic variants and 31% with MLH1 , PMS2 , or MSH2 pathogenic variants were classified as MSS or MSI-Indeterminate. 
Thus, NGS assays using large numbers of microsatellite markers for MSI determination may not increase sensitivity and can identify a substantial number of MSI-Indeterminate cases of uncertain clinical significance. Use of markers specific to cancer type is another approach that has been investigated to improve detection of dMMR tumors. Long et al examined 9438 tumor-normal exome pairs and 901 whole-genome sequence pairs from 32 different cancer types for MSI by NGS. The top 2000 microsatellite markers most strongly associated with MSI-H status across tissue types were examined. Cancer-specific microsatellite panels of fewer than seven markers were found to be sufficient to attain ≥95% sensitivity and specificity for 11 of 15 cancer types examined. Thus, only a small number of markers were needed to provide accurate detection of MSI in most cancer types. However, marker panels selected for specific cancers were not generally applicable across cancers. The approach taken in development of the LMR MSI Analysis System to increase MSI sensitivity across all cancer types was to use new markers with a generalized higher MSI sensitivity for detection of all dMMR tumors. It has been shown that sequence instability in microsatellites increases exponentially with increasing repeat length. , , , , Based on the observation of increasing instability with increasing homopolymer length, we hypothesized that use of long mononucleotide repeat (LMR) markers for MSI analysis would improve the sensitivity of MSI detection. This hypothesis was confirmed in previous studies using LMR markers. , , In the current study, we significantly expanded on our previous work and assessed the accuracy of dMMR detection using the LMR MSI Analysis System on a pan-cancer cohort of Lynch syndrome and sporadic tumors from the Colon Cancer Family Registry (CCFR).
Study Population

Patient selection for this study included individuals with any type of cancer exhibiting loss of MMR protein expression by IHC and/or a pathogenic germline variant in the MLH1 , MSH2 , MSH6 , or PMS2 genes. In addition, patients with sporadic MSI-H/dMMR CRC were included if tumors exhibited loss of MLH1 expression according to IHC and either MLH1 promoter methylation or absence of germline MMR pathogenic variants. Patients with sporadic MSS/pMMR CRC were included if tumors had normal MMR expression and absence of germline MMR pathogenic variants. Tumor DNA from a total of 469 patients with cancer was obtained from the CCFR, including 149 Lynch syndrome CRCs, 170 non-CRC Lynch syndrome cancers, 71 sporadic MSI-H CRCs, and 79 sporadic MSS CRCs. The number and type of cancer samples in this study are provided in . Data on MMR protein expression by IHC, MLH1 promoter methylation, BRAF V600E mutations, and germline MMR mutations were provided by the participating CCFR sites (Mount Sinai Hospital, University of Melbourne, and the Mayo Clinic). , , Tumor and patient information from the pathology reports was provided to the CCFR by the treating institutions. All participants gave informed consent for the study, which was approved by the Institutional Review Board at each CCFR site.

IHC and Germline MMR DNA Sequencing

IHC analysis of MLH1, PMS2, MSH2, and MSH6 protein expression was previously performed on formalin-fixed, paraffin-embedded tumor samples at CCFR centers. The interpretation of IHC slides was performed by a pathologist without knowledge of the tumor MSI status. Germline sequencing of the MLH1 , MSH2 , MSH6 , and PMS2 genes was performed on the Lynch syndrome samples used in this study by the participating CCFR institutions as described previously.
MSI Testing

DNA from paired blood and formalin-fixed, paraffin-embedded tumor samples was tested for MSI by using the LMR MSI Analysis System, which contains four SMRs ( NR-21 , BAT-25 , BAT-26 , and MONO-27 ) and four LMRs ( BAT-52 , BAT-56 , BAT-59 , and BAT-60 ). Confirmation of matching normal/tumor sample pairs was achieved by comparing allelic profiles for the polymorphic LMR markers, replacing the need for polymorphic pentanucleotide repeats. The LMR MSI Analysis System results were compared with those of the Promega pentaplex panel, which consists of five mononucleotide repeat markers ( NR-21 , NR-24 , BAT-25 , BAT-26 , and MONO-27 ) for MSI analysis and two polymorphic pentanucleotide repeat markers ( Penta C and Penta D ) for sample identification. PCR amplification products from both MSI assays were analyzed on an ABI 3500xL capillary electrophoresis instrument (Applied Biosystems, Foster City, CA) using a 36-cm capillary array and POP-4 polymer, and data were analyzed with GeneMapper software version 6.0 (Thermo Fisher Scientific, Waltham, MA). A microsatellite marker was called unstable if one or more tumor alleles were shifted by at least 2 bp from the germline allele or exhibited other subtle forms of instability described later in this section. The tumor MSI status is based on the number of unstable markers. For both panels, a tumor was designated MSI-H if two or more mononucleotide markers were unstable and MSS if one or no mononucleotide markers were unstable in the tumor sample. With the Promega pentaplex panel, MSI results were reported when at least two markers were scored for MSI-H calls and at least four markers were scored for MSS calls; with the LMR panel, when at least two markers were scored for MSI-H calls and at least seven markers were scored for MSS calls.
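As a worked illustration of the calling and reporting rules just described (two or more unstable markers for an MSI-H call; at least four scored markers for a pentaplex MSS call and at least seven for an LMR MSS call), a minimal sketch:

```python
# Minimum number of scored (interpretable) markers required to report an
# MSS result, per panel, as described in the text.
MIN_SCORED_FOR_MSS = {"pentaplex": 4, "lmr": 7}

def msi_status(n_unstable: int, n_scored: int, panel: str) -> str:
    """Return the reported MSI status for a tumor given the number of
    unstable markers, the number of markers successfully scored, and the
    panel used ("pentaplex" or "lmr")."""
    if n_unstable >= 2:
        # An MSI-H call requires at least two scored markers, which is
        # automatically satisfied when two or more markers are unstable.
        return "MSI-H"
    if n_scored >= MIN_SCORED_FOR_MSS[panel]:
        return "MSS"
    return "not reportable"  # too few informative markers for an MSS call

print(msi_status(0, 8, "lmr"))  # MSS
print(msi_status(1, 5, "lmr"))  # not reportable
```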
Differences in the extent of MSI among tumors have been investigated because this variation may affect a patient’s response to immunotherapy. The extent, or intensity, of the MSI phenotype for a tumor is reported here as an MSI Intensity Score. MSI Intensity Scores were calculated by using the formula:

MSI Intensity Score = [ Σ ( |Observed size shift| / Maximum observed size shift ) ÷ Number loci ] × 100   (1)

where the sum runs over the loci NR-21 , BAT-25 , BAT-26 , MONO-27 , BAT-52 , BAT-56 , BAT-59 , and BAT-60 . Observed size shift is the absolute value of the deletion or insertion in base pairs at a given locus for a given sample. Maximum observed size shift is the largest size shift observed for a locus across all samples in this study (ie, NR-21 = 14 bp, BAT-25 = 13 bp, BAT-26 = 15 bp, MONO-27 = 19 bp, BAT-52 = 32 bp, BAT-56 = 41 bp, BAT-59 = 43 bp, BAT-60 = 35 bp). Number loci is the total number of loci in the MSI panel with results for a given sample. If all loci in a sample had size shifts as large as the Maximum observed size shift at each corresponding locus, the sample would have an MSI Intensity Score of 100. Interpretation of MSI using the LMR MSI Analysis System has been described previously. Briefly, the allelic pattern of mononucleotide microsatellites in the electropherograms includes multiple peaks due to PCR slippage events in the homopolymer sequences. These artifact peaks are referred to as stutter peaks. The tallest peak of each allele is referred to as the modal peak and represents the true DNA fragment length. New alleles in dMMR tumors result in a multimodal distribution of electropherogram peaks at one or more mononucleotide repeat markers. Most microsatellite alterations in dMMR tumors are deletions of one or more repeat units, resulting in a decrease in PCR fragment length compared with the normal germline allele.
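Equation 1 can be checked with a short script; the per-locus maxima below are the cohort values quoted in the text, and any example shifts supplied to the function are invented for illustration:

```python
# Largest size shift (bp) observed per locus across this study's cohort.
MAX_SHIFT_BP = {"NR-21": 14, "BAT-25": 13, "BAT-26": 15, "MONO-27": 19,
                "BAT-52": 32, "BAT-56": 41, "BAT-59": 43, "BAT-60": 35}

def msi_intensity_score(observed_shifts_bp: dict) -> float:
    """Equation 1: average the ratio |observed shift| / maximum observed
    shift over all loci with a result, then scale to 0-100."""
    ratios = [abs(shift) / MAX_SHIFT_BP[locus]
              for locus, shift in observed_shifts_bp.items()]
    return 100 * sum(ratios) / len(ratios)

# A sample shifted by the cohort maximum at every locus scores 100.
print(msi_intensity_score(MAX_SHIFT_BP))  # 100.0
```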
When the shift is less than three to four bases, the shifted tumor peaks may overlap with germline stutter peaks, resulting in a “shoulder” pattern without a new modal peak. Low tumor cellularity in combination with small size shifts can complicate the interpretation of shoulders, and therefore ≥20% neoplastic cell content is generally recommended for robust MSI testing. A marker is called unstable if there is a shift of at least 2 bp (rounding up from 1.5 bp) between the tallest peaks in paired normal and tumor samples, or if the shoulder pattern extends the range of the smallest stutter peak in the tumor sample by at least 2 bp. Expansions of microsatellite length caused by insertion of repeat units, although rare in the Promega pentaplex panel markers and occurring in only a few percent of LMR markers in MSI-H/dMMR tumors, were also considered in MSI determinations. The recommended method for MSI analysis is to compare microsatellite profiles of a tumor sample with those of a matching normal sample. However, in some cases, a matching normal sample is not available. To address this issue, Suraweera et al proposed using microsatellite markers that are monomorphic in the population, allowing the use of a standard reference normal sample in place of matching normal samples for MSI analysis. The markers in the Promega pentaplex panel are quasi-monomorphic, and it has been shown that the MSI status of CRC can be accurately determined in most cases without comparison with a matching normal sample. , To account for slight variation in allele sizes in a population, the quasi-monomorphic variation range (QMVR) of pooled normal samples is used. , , QMVR values were calculated for each of the SMR markers in the LMR and Promega pentaplex panels by taking the average size of alleles ±2.5 bp from all normal samples in our study cohort.
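The QMVR computation described above (mean allele size across normal samples ±2.5 bp) and the corresponding tumor-only instability call can be sketched as follows; the allele sizes in the example are made up for illustration:

```python
def qmvr(normal_allele_sizes_bp, window_bp=2.5):
    """Quasi-monomorphic variation range for one quasi-monomorphic marker:
    the mean allele size across normal samples +/- window_bp
    (2.5 bp, as in the text)."""
    mean = sum(normal_allele_sizes_bp) / len(normal_allele_sizes_bp)
    return (mean - window_bp, mean + window_bp)

def unstable_by_qmvr(tumor_allele_bp, qmvr_range):
    """Tumor-only rule: an allele falling outside the marker's QMVR
    is scored as unstable."""
    low, high = qmvr_range
    return not (low <= tumor_allele_bp <= high)

rng = qmvr([120, 121, 119, 120])   # -> (117.5, 122.5)
print(unstable_by_qmvr(115, rng))  # True: allele falls below the QMVR
```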
The LMR markers BAT-52 , BAT-56 , BAT-59 , and BAT-60 are polymorphic, and QMVR values are not applicable; modified rules were therefore applied for tumor-only MSI determinations with the LMR panel. All LMR markers with three or more alleles or an “obvious” shoulder pattern (ie, a clear visual difference in the pattern of stutter peaks between matching normal and tumor alleles) were considered unstable. For the X-linked LMR markers BAT-52 and BAT-56 , the presence of two or more alleles in a tumor from a male patient was considered unstable. Any SMR marker in the LMR panel was scored as unstable if its alleles fell outside the respective QMVR size range or exhibited an obvious shoulder pattern. If a tumor exhibited instability in two or more markers, it was classified as MSI-H, the same cutoff used when a matching normal sample is included. Receiver-operating characteristic (ROC) curves were generated in R version 4.3 using the ROCR and ggplot2 packages ( https://cran.r-project.org/web/packages/available_packages_by_name.html ) to determine the optimal cutoff for the number of unstable markers used for tumor MSI classification. To determine the effect of marker number on MSI assessment, the overall assay sensitivity for panel sizes ranging from 3 to 50 markers was calculated. For each iteration tested, all markers were assigned the same individual sensitivity value, ranging from 40% to 90%, and a cutoff value for MSI-H classification between 20% and 40% unstable markers. Each marker in a panel was assumed to be an independent Bernoulli trial, and the probability of having at least the number of successes required by the percent unstable marker cutoff to call a patient MSI-H was calculated by using the binomial distribution. The Fisher exact test, t-test, one-way analysis of variance, and Dunn’s method for pairwise comparisons were performed to calculate P values.
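The binomial model described above (each marker an independent Bernoulli trial, with a tumor called MSI-H when at least the cutoff fraction of markers is unstable) can be reproduced with the standard library; the parameter values in the example are arbitrary choices within the stated ranges:

```python
from math import ceil, comb

def overall_sensitivity(n_markers: int, marker_sens: float,
                        unstable_cutoff: float) -> float:
    """Probability that a dMMR tumor is called MSI-H when each of
    n_markers is an independent Bernoulli trial with per-marker
    sensitivity marker_sens, and at least unstable_cutoff (a fraction)
    of the markers must be unstable for an MSI-H call."""
    k_min = ceil(unstable_cutoff * n_markers)  # unstable markers needed
    return sum(comb(n_markers, k)
               * marker_sens ** k
               * (1 - marker_sens) ** (n_markers - k)
               for k in range(k_min, n_markers + 1))

# Eight markers at 90% per-marker sensitivity with a 25% cutoff
# (>=2 unstable markers) give near-perfect overall assay sensitivity.
print(overall_sensitivity(8, 0.9, 0.25))
```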
Sensitivity, specificity, and accuracy of the MSI-PCR tests for determination of tumor MSI status were determined with IHC as the reference standard using formulas given in . The 95% confidence intervals were calculated by using the Clopper-Pearson exact binomial interval. Specimens that were MSI-H according to PCR and dMMR according to IHC were classified as true positives, or false positives if a tumor was MSI-H and pMMR. Specimens that were MSS according to PCR and pMMR according to IHC were classified as true negatives, or false negatives if a tumor was MSS and dMMR. Although MMR IHC was defined as the reference standard for the purposes of this study, the MMR status as determined by IHC may not always be correct because the test is not 100% accurate. , ,
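The performance metrics and their exact intervals can be computed without external libraries; the sketch below derives the Clopper-Pearson bounds by bisection on the binomial tail rather than via the beta quantile, which yields the same interval:

```python
from math import comb

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """Exact (Clopper-Pearson) two-sided confidence interval for a
    binomial proportion with k successes in n trials, solved by bisection."""
    def bisect(below_root):
        lo, hi = 0.0, 1.0
        for _ in range(60):  # 60 halvings: precision far below 1e-15
            mid = (lo + hi) / 2
            if below_root(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # Lower bound: the p at which P(X >= k | p) equals alpha/2.
    lower = 0.0 if k == 0 else bisect(
        lambda p: 1 - binom_cdf(k - 1, n, p) < alpha / 2)
    # Upper bound: the p at which P(X <= k | p) equals alpha/2.
    upper = 1.0 if k == n else bisect(
        lambda p: binom_cdf(k, n, p) >= alpha / 2)
    return lower, upper

# Sensitivity = TP / (TP + FN); e.g. 95 true positives of 100 dMMR tumors.
sens = 95 / (95 + 5)
print(sens, clopper_pearson(95, 100))
```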
Characterization of the Study Cohort

A total of 469 DNA samples from 20 different cancer types were obtained from the CCFR; they included 319 (149 CRC and 170 non-CRC) cancers from patients with Lynch syndrome and 150 sporadic or nonheritable CRC cases. The characteristics of the study population are summarized in . The average age at diagnosis was 47.3 years for Lynch syndrome MSI-H/dMMR CRC, 51.7 years for sporadic MSS/pMMR CRC, and 59.2 years for sporadic MSI-H/dMMR CRC (Lynch versus sporadic MSS, P = 0.037; Lynch versus sporadic MSI-H, P < 0.001; sporadic MSS versus sporadic MSI-H, P < 0.001). The age at diagnosis for individuals with various MMR gene deficiencies across all cancer types was not significantly different ( P = 0.218). There were 188 male subjects and 281 female subjects included in the study. Lynch syndrome colon tumors were more often right-sided (ie, proximal to the splenic flexure, excluding rectum) [77% (98 of 127); P < 0.001], which is consistent with previous studies. MMR protein expression by IHC was available for 463 of the 469 study cases. Of the dMMR cancers, 161 displayed loss of expression of MLH1 or MLH1/PMS2, 136 loss of MSH2 or MSH2/MSH6, 31 loss of MSH6 only, and 24 loss of PMS2 only. Germline sequencing data on the MMR genes were available for 467 of 469 samples. Of the 319 cases classified as Lynch syndrome by the CCFR, germline MMR pathogenic variants were found in 150 MSH2 , 103 MLH1 , 39 MSH6 , 26 PMS2 , and 1 EPCAM . As previously reported, MLH1 promoter methylation [1.7% (2 of 118)] and BRAF V600E mutations [2.5% (5 of 202)] were uncommon across all MSI-H/dMMR Lynch syndrome cancers as well as in sporadic MSS/pMMR CRC [2.5% (1 of 40) and 5.6% (4 of 72), respectively]. In contrast, 57.2% (36 of 63) of sporadic MSI-H/dMMR CRCs tested had MLH1 promoter methylation and 60.2% (41 of 68) a BRAF V600E mutation.
Performance of the LMR MSI Analysis System

To assess the performance of the LMR MSI Analysis System, comparisons were made with the current standard MSI and IHC tests. For MSI analysis, the Promega pentaplex panel was used as the standard for this study. The LMR MSI Analysis System is a newly developed pan-cancer test for MSI that contains four of the five SMR markers in the Promega pentaplex panel plus four LMRs for improved pan-cancer MSI detection . The microsatellite markers in both panels consist of adenine mononucleotide repeats, ranging from 21 to 27 repeats in the SMR markers and 52 to 60 repeats in the LMR markers. LMR markers are polymorphic in the population, however, and the number of repeats at a given locus can vary among individuals. Amplified PCR products generated with the MSI kits were sized by using capillary electrophoresis, and the data were analyzed using GeneMapper software to determine the size differences between the paired normal and tumor samples. Interpretation of electropherograms is illustrated in . Representative electropherograms of MLH1-deficient colon and endometrial cancer specimens using the LMR panel are shown in and , and MSH6 -deficient colon and endometrial cancers in and . The tumor MSI status using the LMR and the Promega pentaplex panels and the MMR status using IHC across all cancers is summarized in and detailed in . The accuracy (using IHC as the reference standard) across all cancers was significantly higher with the LMR panel than with the Promega pentaplex panel (96.5% versus 92.6%; P = 0.009) . The increased accuracy of the LMR MSI panel was primarily due to the higher sensitivity of LMR compared with the Promega pentaplex panel (97.5% versus 91.8%; P < 0.001), as there was no statistically significant difference in specificity (93.5% versus 95.34%; P = 0.7678). In the CRC cohort, the sensitivity of the LMR and the Promega pentaplex panels was 98.6% and 96.8%, and the specificity was 98.8% and 98.8%, respectively.
Neither sensitivity nor specificity was significantly different for CRC ( P = 0.338 and P = 1.000). In the non-CRC tumors, sensitivity was greater with LMR (95.7% versus 83.9%; P = 0.001), but there was no significant difference in specificity (76.9% versus 84.6%; P = 0.727). The lower-than-expected specificity value for non-CRC samples is likely due to the small number of pMMR samples ( n = 26) tested in this study, including six with normal MMR IHC results that had germline MMR pathogenic variants. Discordant samples from the inter-test MSI comparisons in are described in (between MSI-PCR and IHC) and (between LMR and the Promega pentaplex panels). Next, all cancer types were examined to determine whether the discrepancy in accuracy between panels was related to particular cancer types. The overall percent agreement between the LMR and the Promega pentaplex panels was 98.7% for CRC and 89.3% for non-CRC tumors ( P < 0.001), indicating that most of the discrepancy was related to the non-CRC tumors. Indeed, for CRC tumors, there was no significant difference in the accuracy of the LMR and Promega pentaplex panels (98.7% versus 97.3%; P = 0.383) compared with IHC. However, in the non-CRC cohort, the accuracy was significantly higher for the LMR panel than for the Promega pentaplex panel (92.7% versus 84.0%; P = 0.016). For endometrial cancers specifically, which have the highest percentage of MSI-H cases by cancer type, the sensitivity of the LMR panel for detection of dMMR tumors was 95.9% (47 of 49). In addition, one endometrial cancer sample that was pMMR by IHC was MSI-H and had a germline pathogenic variant in MSH6. Overall, the LMR panel showed significantly greater accuracy for detection of non-CRC tumors and equivalent accuracy in CRC tumors compared with the Promega pentaplex panel.
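The panel-versus-IHC performance metrics reported above (sensitivity, specificity, and accuracy, with IHC as the reference standard) reduce to simple ratios over a 2 x 2 confusion table. A minimal sketch in Python, using illustrative counts rather than the study's actual tallies:

```python
def performance(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy for an MSI panel scored
    against dMMR/pMMR status by IHC as the reference standard.
    tp = MSI-H calls in dMMR tumors, fn = MSS calls in dMMR tumors,
    tn = MSS calls in pMMR tumors,  fp = MSI-H calls in pMMR tumors."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Illustrative counts only (not taken from the paper's tables)
sens, spec, acc = performance(tp=340, fn=9, tn=100, fp=7)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} accuracy={acc:.3f}")
```

Comparing two panels then amounts to comparing these ratios with an appropriate test for paired proportions, as done in the study.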
Although MSI and MMR IHC results are typically highly concordant, in cases in which differences arise, a third orthogonal test (typically MMR gene sequencing) may help to resolve the differences. In this study, there were nine MSS/dMMR and five MSI-H/pMMR discordant cases between LMR and IHC . Of these, seven of nine MSS/dMMR cases had germline MMR pathogenic variants, indicating these may be false-negative MSI results, and five of five MSI-H/pMMR cases had germline MMR pathogenic variants, indicating these may be false-negative IHC results. For the Promega pentaplex panel discordant cases, 25 of 29 MSS/dMMR cases had germline MMR pathogenic variants, indicating these may be false-negative MSI results, and all four of the MSI-H/pMMR cases had germline MMR pathogenic variants, indicating these may be false-negative IHC results. The results from orthogonal testing indicate that dual testing of MSI and IHC would yield the greatest overall accuracy. Discordant cases between the LMR and the Promega pentaplex panels are shown in . There were 21 discordant cases involving all MMR genes (with slightly more occurring in MSH6 , which is consistent with the occurrence of milder MSI phenotypes in tumors with MSH6 loss). The major characteristic associated with the discordant cases is that most are non-CRCs, and this is illustrated by the difference in sensitivity between the LMR and the Promega pentaplex panels for non-CRC . There was also a significant difference in the average MSI Intensity Scores between the discordant cases and all dMMR cases (9.1 versus 38.6; P < 0.001). Lower scores are expected for non-CRCs, which typically have fewer unstable markers and exhibit smaller size shifts, and this translates into lower MSI Intensity Scores.

Characterization of Individual Markers

Microsatellite markers are not equally sensitive and specific for detection of dMMR and pMMR tumors, and considerable effort has gone into identifying the best markers for MSI testing.
The relative performance of the individual markers in the LMR MSI Analysis System was assessed to determine how often each marker was stable or unstable in pMMR and dMMR tumors, and what the magnitude of the change was. Other marker characteristics, including allele frequencies, percent heterozygosity, and inter-assay variation between SMR markers in the LMR and the Promega pentaplex panels, were also assessed. An evaluation of the sensitivity and specificity of the individual markers within the LMR panel is summarized in . As a group, the average sensitivity for the four LMR markers was significantly higher than that for the four SMR markers in non-CRC tumors (88.3 versus 73.7; P < 0.001) but not in CRC (96.4 versus 95.0; P = 0.191). The average specificity of the LMR and SMR markers was significantly different for CRC (95.7 versus 98.5; P = 0.038) but not for non-CRC (81.7 versus 88.5; P = 0.242). The LMR markers BAT-52 and BAT-56 are located on the X chromosome, and therefore male subjects always appear homozygous at these loci. Because there is only a single copy of BAT-52 and BAT-56 in male cells that can potentially be mutated, the question arises as to whether these markers may be less sensitive to MSI in tumors occurring in male subjects compared with female subjects. For any given LMR marker, there was no significant difference in sensitivity between male subjects and female subjects ( P > 0.1) . In summary, the sensitivity of all four LMR markers was higher than that of the best SMR marker for non-CRC, but for CRC, all markers exhibited comparably high levels of sensitivity. The size of insertion/deletion mutations in microsatellite sequences varied widely among markers . The average size shifts were larger for all four LMR markers compared with the SMR markers (14.9 bp versus 5.8 bp; P < 0.001).
Most mutations were deletions, but there were two insertions in 318 MSI-H tumors detected with the Promega pentaplex panel markers and 23 insertions in 349 MSI-H tumors detected with the LMR panel markers (21 in LMRs and 2 in SMRs). A minimum size shift of 2 bp (rounded up from ≥1.5 bp) was required to classify a marker as unstable. Size shifts <1.5 bp were commonly observed in MSI-H/dMMR tumors, but they were also observed in 80% (85 of 106) of pMMR tumors in one or more markers. Therefore, markers with <1.5 bp shifts cannot reliably be considered unstable. The average marker size shift varied widely among cancer types, with small shift sizes commonly observed in non-gastrointestinal tract cancers such as endometrial, breast, and prostate . The allele frequency and percent heterozygosity for the markers in the LMR MSI Analysis System were determined for the 469 matching normal samples in the study cohort ( and ). The SMR markers all exhibited very low levels of variability as measured by total number of alleles (five to nine alleles per marker) and percent heterozygosity (0% to 3% per marker), in agreement with previous reports. In contrast, the four LMR markers exhibited a broad range of allele sizes (on average 40 alleles per marker) and much higher heterozygosity levels. This difference between SMR and LMR loci is largely attributable to the near-monomorphic nature of the SMR loci, which have a common germline allele, as opposed to the LMR loci, which are polymorphic and do not have a common germline allele. Heterozygosity for the autosomal LMR markers BAT-59 and BAT-60 was around 70%, and lower for the X-linked markers BAT-52 and BAT-56, as expected, because these markers can only be heterozygous in XX female subjects. In MSI-H/dMMR tumors, mutant alleles seem to have been created predominantly by small deletions in the germline microsatellite allele.
This is illustrated by the distribution of mutant alleles in non-gastrointestinal MSI-H/dMMR tumors, in which the modal peak for the most common size shift is approximately 2 bp . In contrast, CRCs exhibited right-shifted modal peaks with deletions >2 bp, presumably due to the accumulation of multiple mutational events from a high number of replication cycles after loss of MMR function in rapidly dividing colon cells. This pattern is consistent with the stepwise deletion model, in which a single repeat unit is altered per mutational event, and larger deletions are created through accumulation of multiple smaller events moving toward relative stability at a minimum of approximately 15 repeats. The minimum estimated number of repeats observed in SMR loci ranged from 7 to 12 repeats, as opposed to 15 to 25 repeats for the LMR loci, which require more stepwise mutational events before reaching a minimum repeat number. Deletions in LMR loci involving loss of multiple repeat units in a single event may have also occurred but at a lower frequency. This is shown by the pattern of spontaneous mutant alleles in the four LMR loci from 107 MSS/pMMR tumors, in which single-repeat-unit deletions accounted for 92% (54 of 59) of mutational events and larger deletions of 9 to 26 repeat units occurred in only 8%. Interassay comparison of SMR marker alterations contained in both the LMR MSI Analysis System and the Promega pentaplex panel in MSI-H cancers is shown in . Discordant alterations in NR-21 , BAT-25 , BAT-26 , and MONO-27 markers were observed in 4.3% (80 of 1856) of marker comparisons. Using only these four markers for MSI classification, the discordant calls changed the overall tumor MSI status determination in 1.1% (5 of 464) of cases. Differences in the MSI calls between panels were often due to small 1 to 2 bp changes in the size of deletions or the presence or absence of a subtle shoulder pattern that resulted in a change in marker classification.
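The marker-level size-shift rule used throughout this section (a marker is scored unstable only when the tumor allele shifts by at least 1.5 bp from the matched-normal allele, since smaller shifts also occur in pMMR tumors) can be sketched as follows; the function names are illustrative:

```python
MIN_SHIFT_BP = 1.5  # shifts below this also occur in pMMR tumors and are unreliable

def size_shift_bp(normal_allele_bp, tumor_allele_bp):
    """Absolute tumor-versus-matched-normal allele size difference in bp."""
    return abs(tumor_allele_bp - normal_allele_bp)

def marker_unstable(normal_allele_bp, tumor_allele_bp, min_shift_bp=MIN_SHIFT_BP):
    """Score a single microsatellite marker as unstable on size shift alone."""
    return size_shift_bp(normal_allele_bp, tumor_allele_bp) >= min_shift_bp

print(marker_unstable(120.0, 118.9))  # 1.1 bp shift -> False
print(marker_unstable(120.0, 117.0))  # 3.0 bp shift -> True
```

In practice the call also weighs the electropherogram pattern (eg, shoulder peaks), which this size-only sketch does not capture.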
Amplicon sizes for the SMR loci are smaller in the LMR panel and therefore tend to amplify DNA from formalin-fixed, paraffin-embedded samples more robustly, which may account for some of the observed variation.

Cutoffs and Optimal Number of Markers for MSI Classification

The Bethesda guidelines established a reference set of five microsatellite markers (mononucleotide and dinucleotide repeats) and a method for classifying MSI status based on the number of unstable markers observed in a tumor. These guidelines were later revised, and a panel of five mononucleotide repeat markers replaced the original panel, although the total number of markers and the cutoffs remain the same. Guidelines for MSI tumor classification using the LMR MSI Analysis System have been provided by Promega. A tumor is classified as MSI-H if two or more markers are unstable and MSS if 0 or 1 marker is unstable. The guidelines do not provide cutoff criteria for classification of an MSI-Low group. In contrast, a recent publication on the validation of the LMR MSI kit used a different cutoff, with three or more unstable markers classified as MSI-H and one or two unstable markers as MSI-Low. ROC analysis was therefore performed to determine the optimal cutoff value for the accurate classification of pan-cancer MSI status . The optimal cutoff for MSI-H tumor classification based on ROC analysis was two or more unstable markers, with an AUC value of 0.949. There were seven samples with two unstable markers of the eight markers tested. All but one exhibited loss of MMR expression by IHC and had germline MMR pathogenic variants. There were 14 samples with three unstable markers of the eight markers tested. All of these samples showed loss of MMR expression by IHC, and 11 had germline MMR pathogenic variants. Thus, the cutoff of two or more unstable markers for classification of MSI-H tumors determined by ROC analysis is supported by both IHC and sequencing data.
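Under the cutoff supported by the ROC analysis above, tumor-level classification is simply a count of unstable markers: two or more of the eight markers unstable calls the tumor MSI-H, and zero or one calls it MSS. A minimal sketch (the function name is illustrative):

```python
def classify_msi(n_unstable, n_tested=8, cutoff=2):
    """MSI-H if at least `cutoff` markers are unstable; otherwise MSS.
    ROC analysis in the study supported cutoff = 2 of 8 markers (AUC 0.949)."""
    if not 0 <= n_unstable <= n_tested:
        raise ValueError("unstable marker count out of range")
    return "MSI-H" if n_unstable >= cutoff else "MSS"

print(classify_msi(1))  # MSS
print(classify_msi(2))  # MSI-H
```

Note that this sketch, like the Promega guidelines it follows, does not define an MSI-Low category.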
The average percentage of unstable LMR panel markers in CRC and non-CRC samples is shown in . More than 93% of MSI-H/dMMR CRC exhibited instability in at least seven of eight markers compared with 61% in non-CRC samples ( P = 0.013). The number of unstable markers per dMMR tumor was more evenly distributed in non-CRC samples. This raised the question of whether the use of more markers would increase detection of MSI for non-CRC. The overall sensitivity for MSI detection was therefore calculated in silico for panels ranging in size from 3 to 50 markers, with individual marker sensitivities ranging from 40% to 90% and cutoffs of 20% to 40% unstable markers . These results indicate that increasing the number of markers in the LMR panel, which in this study had a sensitivity of 98.6% for CRC and 95.7% for non-CRC samples at a cutoff of 25% unstable markers, would not have significantly improved overall sensitivity for MSI detection.

Differences in MSI Intensity

The extent or intensity of MSI can vary between dMMR tumors in terms of the percentage of unstable microsatellite markers and the number of repeat units inserted or deleted. Differences in the intensity of MSI among tumors were investigated, as this variation may be important for predicting a patient's response to immunotherapy. Different cancer types exhibited different levels of MSI as assessed by their MSI Intensity Scores . Cancers of the gastrointestinal tract, including the stomach, small intestine, colon, and rectum, had significantly higher MSI Intensity Scores than non-gastrointestinal cancer types ( P < 0.001). Furthermore, wide variation in MSI Intensity Scores among tumors of the same cancer type was observed. This variation in MSI intensity may be due in part to differences in the size of a tumor, because MSI is believed to be a progressive phenomenon that increases in intensity over time after loss of MMR as the tumor develops.
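The in silico panel-size analysis above can be reproduced with a simple binomial model: if each marker is unstable in a dMMR tumor with probability p, a panel of n markers with cutoff fraction c detects MSI with probability P(X ≥ ⌈cn⌉), where X ~ Binomial(n, p). A sketch under the simplifying assumption that markers behave independently:

```python
from math import ceil, comb

def panel_sensitivity(n_markers, marker_sens, cutoff_frac):
    """Probability a dMMR tumor is called MSI-H: P(X >= ceil(cutoff_frac * n))
    with X ~ Binomial(n_markers, marker_sens), markers assumed independent."""
    k_min = ceil(cutoff_frac * n_markers)
    return sum(
        comb(n_markers, k) * marker_sens**k * (1 - marker_sens)**(n_markers - k)
        for k in range(k_min, n_markers + 1)
    )

# 8 markers at 70% per-marker sensitivity, MSI-H at >=25% (2 of 8) unstable
print(round(panel_sensitivity(8, 0.70, 0.25), 4))  # -> 0.9987
```

Even modest per-marker sensitivity yields high panel sensitivity at a low cutoff, which is consistent with the finding that enlarging the LMR panel would not have meaningfully improved detection.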
Tumor size of MSI-H/dMMR colon cancers was found to be positively associated with MSI intensity ( P < 0.001). Next, the effect of deficiencies in the four MMR genes on MSI intensity, as well as the epigenetic loss of MLH1 in sporadic CRCs, was assessed. MSI Intensity Scores for CRCs that were MSI-H with the LMR MSI Analysis System and dMMR with IHC were determined . Sporadic MSI-H/dMMR CRCs with epigenetic loss of MLH1 had the highest average MSI Intensity Score, followed in decreasing order by MMR gene deficiencies in MLH1, MSH2, PMS2, and MSH6 (sporadic MSI-H, MLH1, and MSH2 scores were significantly different from MSH6; P < 0.001). There was only one case in the study with an EPCAM pathogenic variant. This case was MSI-H, exhibited loss of MSH2 expression by IHC, and had an MSI Intensity Score of 49.2, which is very close to the mean score for cases with germline MSH2 pathogenic variants. In addition to the gene effects, there was variation in MSI intensity between cancer types with the same MMR gene deficiency. Finally, the association between MSI intensity and tumor immune response was examined by assessing the presence of tumor-infiltrating lymphocytes (TILs). A high level of TILs in the tumor indicates that the body has initiated an immune response against the tumor. MSI-H CRC exhibited a high level of TILs (≥1:10 lymphocyte to epithelial nuclei ratio) in 72.9% (124 of 170) of tumors compared with 3.9% (3 of 76) in MSS CRC ( P < 0.001). Overall, MSI Intensity Scores were found to vary between cancer types, among cancers of the same type, by tumor size, by MMR gene deficiency, and between pMMR and dMMR tumors. Further investigation into the utility of MSI intensity as a biomarker for personalizing disease management is needed to confirm these observations.
MSI Testing with and without Matching Normal Sample

The Bethesda guidelines for MSI testing recommend testing paired normal and tumor samples because some of the markers in the original Bethesda panel are polymorphic, and to identify a new allele in the tumor sample the germline genotype must be known. The markers NR-21, NR-24, BAT-25, BAT-26, and MONO-27 in the Promega pentaplex panel are quasi-monomorphic, meaning that most individuals in the population have the same size allele, and therefore a common population reference standard can be used in place of a paired normal sample in most cases. For example, a panel of quasi-monomorphic mononucleotide-repeat markers ( NR-21, NR-24, BAT-25, BAT-26, MONO-27 ) (FALCO biosystems, Kyoto, Japan) was approved as a companion diagnostic for pembrolizumab for the treatment of MSI-H solid tumors without requiring the use of matching normal samples. In contrast, the four LMR markers in the LMR MSI Analysis System are polymorphic and require matching normal samples to achieve the most accurate MSI test results ( and ). Matching normal samples are not always available; therefore, the effectiveness of testing only the tumor samples with the LMR MSI kit was investigated. The analysis criteria used for MSI testing of only tumor samples are described in detail in . Briefly, the average normal allele size and quasi-monomorphic variation range (QMVR) values were calculated for the four SMR markers in the LMR panel using the matching normal samples from the CCFR cohort . QMVR values were not calculated for the four LMR markers because they are polymorphic. The SMR markers were scored as unstable if a tumor allele was outside the respective normal QMVR size range or exhibited an obvious shoulder pattern. For the LMR markers, which lacked QMVR values, tumors with three or more alleles per marker or an obvious shoulder pattern were scored as unstable.
In male subjects, the X-linked hemizygous markers BAT-52 and BAT-56 were scored as unstable when two alleles for a given marker were observed. For the LMR MSI Analysis System, the percent agreement for determining tumor MSI status with and without a matching normal sample was 97.6% for CRC samples and 90.9% for non-CRC samples ( P = 0.002) . For CRC samples, there were five false-negative and two false-positive samples of a total of 288 tumors. For non-CRC samples, there were 15 false-negative findings and no false-positive findings of a total of 164 tumors. False-negative results were mainly due to small allele size shifts resulting in new alleles that were still within the normal QMVR range or subtle shoulders that could not reliably be called without a matching normal sample. The two false-positive sample calls were due to germline heterozygosity in the SMR marker in which one allele fell outside of the normal QMVR size range. Overall, the data indicate that tumor-only testing with the LMR MSI Analysis System is feasible if a matching normal sample is not available, but sensitivity may be reduced, especially for non-CRC samples. The ability to correctly identify MSI status using only tumor samples was also assessed for the Promega pentaplex panel and compared with that of the four SMR markers ( NR-21, BAT-25, BAT-26 , and MONO-27 ) also contained in the LMR panel . For the Promega pentaplex panel, the percent agreement for determining tumor MSI status with and without a matching normal sample was 99% for CRC samples and 86.8% for non-CRC samples ( P < 0.001). Similar results were observed using just the four SMR markers from the LMR panel (for CRC and non-CRC, the percent agreement was 98% and 82.8%; P < 0.001). There were no significant differences in MSI calls using the SMR markers versus the Promega pentaplex panel for either CRC or non-CRC [CRC agreement was 98% versus 99% ( P = 0.504); non-CRC agreement was 82.8% versus 86.8% ( P = 0.363)].
Thus, MSI testing with the Promega pentaplex panel using only CRC tumor samples resulted in a nonsignificant 1% loss of sensitivity compared with tests using both tumor and normal samples, whereas there was a 13.2% loss for non-CRC samples.
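The tumor-only scoring rules described in this section can be sketched as follows. The QMVR bounds and allele sizes here are placeholders for illustration, not the study's calibrated values:

```python
def smr_unstable_tumor_only(tumor_alleles_bp, qmvr_lo_bp, qmvr_hi_bp, shoulder=False):
    """SMR marker without a matched normal: unstable if any tumor allele
    falls outside the quasi-monomorphic variation range (QMVR) or an
    obvious shoulder pattern is present."""
    return shoulder or any(a < qmvr_lo_bp or a > qmvr_hi_bp for a in tumor_alleles_bp)

def lmr_unstable_tumor_only(n_tumor_alleles, x_linked_male=False, shoulder=False):
    """LMR marker (polymorphic, so no QMVR): unstable with >= 3 alleles, or
    >= 2 alleles for a hemizygous X-linked marker (BAT-52/BAT-56) in a male
    subject, or an obvious shoulder pattern."""
    threshold = 2 if x_linked_male else 3
    return shoulder or n_tumor_alleles >= threshold

print(smr_unstable_tumor_only([121.0], 119.5, 123.0))         # within QMVR -> False
print(smr_unstable_tumor_only([117.2, 121.0], 119.5, 123.0))  # new short allele -> True
print(lmr_unstable_tumor_only(2, x_linked_male=True))         # -> True
```

The false-negative modes reported above follow directly from these rules: a small shift can leave a mutant allele inside the QMVR, and a subtle shoulder is hard to call without the matched-normal trace.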
A total of 469 DNA samples from 20 different cancer types were obtained from the CCFR; they included 319 (149 CRC and 170 non-CRC) cancers from patients with Lynch syndrome and 150 sporadic or nonheritable CRC cases. The characteristics of the study population are summarized in . The average age at diagnosis was 47.3 years for Lynch syndrome MSI-H/dMMR CRC, 51.7 years for sporadic MSS/pMMR CRC, and 59.2 years for sporadic MSI-H/dMMR CRC (Lynch versus sporadic MSS, P = 0.037; Lynch versus sporadic MSI-H, P < 0.001; sporadic MSS versus sporadic MSI-H, P < 0.001). The age at diagnosis for individuals with various MMR gene deficiencies across all cancer types was not significantly different ( P = 0.218). There were 188 male subjects and 281 female subjects included in the study. Lynch syndrome colon tumors were more often right-sided (ie, proximal to the splenic flexure, excluding rectum) [77% (98 of 127); P < 0.001], which is consistent with previous studies. MMR protein expression by IHC was available for 463 of the 469 study cases. Of the dMMR cancers, 161 displayed loss of expression of MLH1 or MLH1/PMS2, 136 loss of MSH2 or MSH2/MSH6, 31 loss of MSH6 only, and 24 loss of PMS2 only. Germline sequencing data on the MMR genes were available for 467 of 469 samples. Of the 319 cases classified as Lynch syndrome by the CCFR, germline MMR pathogenic variants were found in MSH2 in 150 cases, MLH1 in 103, MSH6 in 39, PMS2 in 26, and EPCAM in 1. As previously reported, MLH1 promoter methylation [1.7% (2 of 118)] and BRAF V600E mutations [2.5% (5 of 202)] were uncommon across all MSI-H/dMMR Lynch syndrome cancers, as well as in sporadic MSS/pMMR CRC [2.5% (1 of 40) and 5.6% (4 of 72), respectively]. In contrast, 57.2% (36 of 63) of sporadic MSI-H/dMMR CRCs tested had MLH1 promoter methylation and 60.2% (41 of 68) a BRAF V600E mutation.
To assess the performance of the LMR MSI Analysis System, comparisons were made with the current standard MSI and IHC tests. For MSI analysis, the Promega pentaplex panel was used as the standard for this study. The LMR MSI Analysis System is a newly developed pan-cancer test for MSI that contains four of the five SMR markers contained in the Promega pentaplex panel plus four LMRs for improved pan-cancer MSI detection . The microsatellite markers in both panels consist of adenine mononucleotide repeats ranging from 21 to 27 repeats in the SMR markers and 52 to 60 repeats in the LMR markers. LMR markers are polymorphic in the population, however, and the number of repeats at a given locus can vary among individuals. Amplified PCR products generated with the MSI kits were sized by using capillary electrophoresis and the data analyzed using GeneMapper software to determine the size differences between the paired normal and tumor samples. Interpretation of electropherograms is illustrated in . Representative electropherograms of MLH1-deficient colon and endometrial cancer specimens using the LMR panel are shown in and and MSH6 -deficient colon and endometrial cancers in and . The tumor MSI status using the LMR and the Promega pentaplex panels and the MMR status using IHC across all cancers is summarized in and detailed in . The accuracy (using IHC as the reference standard) across all cancers was significantly higher with the LMR panel than with the Promega pentaplex panel (96.5% versus 92.6%; P = 0.009) . The increased accuracy of the LMR MSI panel was primarily due to the higher sensitivity of LMR compared with the Promega pentaplex panel (97.5% versus 91.8%; P < 0.001), as there was no statistically significant difference in specificity (93.5% versus 95.34%; P = 0.7678). In the CRC cohort, the sensitivity of the LMR and the Promega pentaplex panels was 98.6% and 96.8%, and the specificity was 98.8% and 98.8%, respectively. 
Neither sensitivity nor specificity was significantly different for CRC ( P = 0.338 and P = 1.000). In the non-CRC tumors, sensitivity was greater with LMR (95.7% versus 83.9%; P = 0.001), but there was no significant difference in specificity (76.9% versus 84.6%; P = 0.727). The lower-than-expected specificity value for non-CRC samples is likely due to the small number of pMMR samples ( n = 26) tested in this study, including six with normal MMR IHC results that had germline MMR pathogenic variants. Discordant samples from the inter-test MSI comparisons in are described in (between MSI-PCR and IHC) and (between LMR and the Promega pentaplex panels). Next, all cancer types were examined to determine whether the discrepancy in accuracy between panels was related to cancer types. The overall percent agreement between the LMR and the Promega pentaplex panels for CRC was 98.7% and 89.3% for non-CRC tumors ( P < 0.001), indicating that most of the discrepancy was related to the non-CRC tumors. Indeed, for CRC tumors, there was no significant difference in the accuracy of the LMR and Promega pentaplex panels (98.7% versus 97.3%; P = 0.383) compared with IHC. However, in the non-CRC cohort, the accuracy was significantly higher for the LMR panel than for the Promega pentaplex panel (92.7% versus 84.0%; P = 0.016). For endometrial cancers specifically, which have the highest percentage of MSI-H cases by cancer type, the sensitivity of the LMR panel for detection of dMMR tumors was 95.9% (47 of 49). In addition, there was a pMMR endometrial cancer sample by IHC that was MSI-H and had a germline pathogenic variant in MSH6. Overall, the LMR panel showed significantly greater accuracy for detection of non-CRC tumors and equivalent accuracy in CRC tumors compared with the Promega pentaplex panel. 
Although MSI and MMR IHC results are typically highly concordant, in cases in which differences arise, a third orthogonal test (typically MMR gene sequencing) may help to resolve the differences. In this study, there were nine MSS/dMMR and five MSI-H/pMMR discordant cases between LMR and IHC . Of these, seven of nine MSS/dMMR cases had germline MMR pathogenic variants, indicating these may be false-negative MSI results, and five of five MSI-H/pMMR cases had germline MMR pathogenic variants, indicating these may be false-negative IHC results. For the Promega pentaplex panel discordant cases, 25 of 29 MSS/dMMR cases had germline MMR pathogenic variants, indicating these may be false-negative MSI results, and all four of the MSI-H/pMMR cases had germline MMR pathogenic variants, indicating these may be false-negative IHC results. The result from orthogonal testing indicates that dual testing of MSI and IHC would yield the greatest overall accuracy. Discordant cases between the LMR and the Promega pentaplex panels are shown in . There were 21 discordant cases involving all MMR genes (with slightly more occurring in MSH6 , which is consistent with the occurrence of milder MSI phenotypes in tumors with MSH6 loss). The major characteristic associated with discordant cases is that most are non-CRCs, and this is illustrated by the difference in sensitivity between the LMR and the Promega pentaplex panels for non-CRC . There was also a significant difference in the average MSI Intensity Scores between the discordant cases compared with all dMMR cases (9.1 versus 38.56; P < 0.001). Lower scores are expected for non-CRCs, which typically have fewer unstable markers and exhibit smaller size shifts, and this translates into lower MSI Intensity Scores.
Microsatellite markers are not equally sensitive and specific for detection of dMMR and pMMR tumors, and considerable effort has gone into identifying the best markers for MSI testing. , , The relative performance of the individual markers in the LMR MSI Analysis System was assessed to determine how often each marker was stable or unstable in pMMR and dMMR tumors, and what was the magnitude of the change. Other marker characteristics, including allele frequencies, percent heterozygosity, and inter-assay variation between SMR markers in the LMR and the Promega pentaplex panels, were also assessed. An evaluation of the sensitivity and specificity of the individual markers within the LMR panel is summarized in . As a group, the average sensitivity for the four LMR markers was significantly higher than that for the four SMR markers in non-CRC tumors (88.3 versus 73.7; P < 0.001) but not in CRC (96.4 versus 95.0; P = 0.191). The average specificity of the LMR and SMR markers was significantly different for CRC (95.7 versus 98.5; P = 0.038) but not for non-CRC (81.7 versus 88.5; P = 0.242). The LMR markers BAT-52 and BAT-56 are located on the X chromosome, and therefore male subjects always appear homozygous at these loci. Because there is only a single copy of BAT-52 and BAT-56 in male cells that can potentially be mutated, the question arises as to whether these markers may be less sensitive to MSI in tumors occurring in male subjects compared with female subjects. For any given LMR marker, there were no significant difference in the sensitivity between male subjects and female subjects ( P > 0.1) . In summary, the sensitivity of all four LMR markers was higher than the best SMR marker for non-CRC, but for CRC, all markers exhibited comparably high levels of sensitivity. The size of insertion/deletion mutations in microsatellite sequences varied widely among markers . 
The average size shifts were larger for all four LMR markers compared with the SMR markers (14.9 bp versus 5.8 bp; P < 0.001). Most mutations were deletions, but there were two insertions in 318 MSI-H tumors detected with the Promega pentaplex panel markers and 23 insertions in 349 MSI-H tumors detected with the LMR panel markers (21 in LMRs and 2 in SMRs). A minimum size shift of 2 bp (rounded up from ≥1.5 bp) was required to classify a marker as unstable. Size shifts <1.5 bp were commonly observed in MSI-H/dMMR tumors, but they were also observed in 80% (85 of 106) of pMMR tumor in one or more markers. Therefore, markers with <1.5 bp shifts cannot reliably be considered unstable. The average marker size shift varied widely among cancer types, with small shift sizes commonly observed in non-gastrointestinal tract cancers such as endometrial, breast, and prostate . The allele frequency and percent heterozygosity for the markers in the LMR MSI Analysis System were determined for the 469 matching normal samples in the study cohort ( and ). The SMR markers all exhibited very low levels of variability as measured by total number of alleles (five to nine alleles per marker) and percent heterozygosity (0% to 3% per marker), in agreement with previous reports. In contrast, the four LMR markers exhibited a broad range of allele sizes (on average 40 alleles per marker) and much higher heterozygosity levels. This difference between SMR and LMR loci is largely attributable to the near monomorphic nature of the SMR loci, which have a common germline allele as opposed to the LMR loci, which are polymorphic and do not have a common germline allele. Heterozygosity for the autosomal LMR markers BAT-59 and BAT-60 was around 70% and lower for X-linked markers BAT-52 and BAT-56 as expected because these markers can only be heterozygous in XX female subjects. 
In MSI-H/dMMR tumors, mutant alleles seem to have been created predominantly by small deletions in the germline microsatellite allele. This is illustrated by the distribution of mutant alleles in non-gastrointestinal MSI-H/dMMR tumors, in which the modal peak for the most common size shift is approximately 2 bp . In contrast, CRCs exhibited right shifted modal peaks with deletions >2 bp, presumably due to the accumulation of multiple mutational events from a high number of replication cycles after loss of MMR function in rapidly dividing colon cells. This pattern is consistent with the stepwise deletion model in which a single repeat unit is altered per mutational event, and larger deletions are created through accumulation of multiple smaller events moving toward relative stability at a minimum repeat number of approximately 15 bp. The minimum estimated number of repeats observed in SMR loci ranged from 7 to 12 repeats, as opposed to 15 to 25 repeats for the LMR loci, which require more stepwise mutational events before reaching a minimum repeat number. Deletions in LMR loci involving loss of multiple repeat units in a single event may have also occurred but at a lower frequency. This is shown by the pattern of spontaneous mutant alleles in the four LMR loci from 107 MSS/pMMR tumors in which 1 repeat unit deletions accounted for 92% (54 of 59) of mutational events and larger deletions of 9 to 26 repeat units occurred in only 8%. Interassay comparison of SMR marker alterations contained in both the LMR MSI Analysis System and the Promega pentaplex panel in MSI-H cancers is shown in . Discordant alterations in NR-21 , BAT-25 , BAT-26 , and MONO-27 markers were observed in 4.3% (80 of 1856) of marker comparisons. Using only these four markers for MSI classification, the discordant calls changed the overall tumor MSI status determination in 1.1% (5 of 464) of cases. 
Differences in the MSI calls between panels were often due to small 1 to 2 bp changes in the size of deletions or the presence or absence of a subtle shoulder pattern that resulted in a change in marker classification. Amplicon sizes for the SMR loci are smaller in the LMR panel and therefore tend to amplify DNA from formalin-fixed, paraffin-embedded samples more robustly, which may account for some of the observed variation.
The Bethesda guidelines established a reference set of five microsatellite markers (mononucleotide and dinucleotide repeats) and a method for classifying MSI status based on the number of unstable markers observed in a tumor. These guidelines were later revised, and a panel of five mononucleotide repeat markers replaced the original panel, although the total number of markers and the cutoffs remain the same. Guidelines for MSI tumor classification using the LMR MSI Analysis System have been provided by Promega. A tumor is considered MSI-H if two or more markers are unstable and MSS if 0 or 1 marker is unstable. The guidelines do not provide cutoff criteria for classification of an MSI-Low group. In contrast, a recent publication on the validation of the LMR MSI kit used a different cutoff of three or more unstable markers for MSI-H and one or two unstable markers for MSI-Low. ROC analysis was therefore performed to determine the optimal cutoff value for the accurate classification of pan-cancer MSI status . The optimal cutoff for MSI-H tumor classification based on ROC analysis was two or more unstable markers, with an AUC value of 0.949. There were seven samples with two unstable markers of the eight markers tested. All but one exhibited loss of MMR expression by IHC and had germline MMR pathogenic variants. There were 14 samples with three unstable markers of the eight markers tested. All of these samples showed loss of MMR expression by IHC, and 11 had germline MMR pathogenic variants. Thus, the cutoff of two or more unstable markers for classification of MSI-H tumors determined by ROC analysis is supported by both IHC and sequencing data.

The average percentage of unstable LMR panel markers in CRC and non-CRC samples is shown in . More than 93% of MSI-H/dMMR CRC exhibited instability in at least seven of eight markers compared with 61% in non-CRC samples ( P = 0.013). The number of unstable markers per dMMR tumor was more evenly distributed in non-CRC samples.
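The cutoff logic above (two or more unstable markers of the eight-marker LMR panel → MSI-H; zero or one → MSS, with no MSI-Low tier) can be sketched as a single function; the function name is an illustrative assumption:

```python
def classify_msi(unstable: int, total: int = 8, cutoff: int = 2) -> str:
    """ROC-supported classification for the eight-marker LMR panel:
    >=2 unstable markers -> MSI-H; 0 or 1 -> MSS (no MSI-Low tier)."""
    if not 0 <= unstable <= total:
        raise ValueError("unstable marker count out of range")
    return "MSI-H" if unstable >= cutoff else "MSS"

print(classify_msi(7), classify_msi(1))  # MSI-H MSS
```

Keeping `cutoff` as a parameter makes it easy to compare against the alternative three-or-more cutoff mentioned in the text.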
This raised the question of whether the use of more markers would increase detection of MSI for non-CRC. The overall sensitivity of marker panels for MSI detection was calculated in silico for panels ranging in size from 3 to 50 markers, with per-marker sensitivities ranging from 40% to 90% and cutoffs of 20% to 40% unstable markers . These results indicate that increasing the number of markers in the LMR panel, which in this study had a sensitivity of 98.6% for CRC and 95.7% for non-CRC samples with a cutoff of 25% unstable markers, would not have significantly improved overall sensitivity for MSI detection.
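Under the simplifying assumption that each marker detects instability independently with the same per-marker sensitivity, the in silico panel sensitivity reduces to a binomial tail probability. The sketch below reproduces that idea, not the study's exact simulation:

```python
from math import comb, ceil

def panel_sensitivity(n_markers: int, marker_sens: float,
                      cutoff_fraction: float) -> float:
    """P(MSI-H tumor is called MSI-H) = P(at least
    ceil(cutoff_fraction * n) of n markers are scored unstable),
    assuming markers behave independently with equal sensitivity."""
    k_min = ceil(cutoff_fraction * n_markers)
    return sum(
        comb(n_markers, k) * marker_sens**k * (1 - marker_sens)**(n_markers - k)
        for k in range(k_min, n_markers + 1)
    )

# Eight markers at 90% per-marker sensitivity, 25% cutoff (>=2 of 8)
print(round(panel_sensitivity(8, 0.90, 0.25), 6))  # 0.999999
```

This illustrates why adding markers beyond eight yields diminishing returns: with reasonably sensitive markers, the probability of fewer than two detections is already negligible.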
The extent or intensity of MSI can vary between dMMR tumors in terms of both the percentage of unstable microsatellite markers and the number of repeat units inserted or deleted. Differences in the intensity of MSI among tumors were investigated because this variation may be important for predicting a patient’s response to immunotherapy. Different cancer types exhibited different levels of MSI as assessed by their MSI Intensity Scores . Cancers of the gastrointestinal tract, including the stomach, small intestine, colon, and rectum, had significantly higher MSI Intensity Scores than non-gastrointestinal cancer types ( P < 0.001). Furthermore, wide variation in MSI Intensity Scores among tumors of the same cancer type was observed. This variation in MSI intensity may be due in part to differences in tumor size because MSI is believed to be a progressive phenomenon that increases in intensity over time after loss of MMR as the tumor develops. Tumor size of MSI-H/dMMR colon cancers was found to be positively associated with MSI intensity ( P < 0.001).

Next, the effects of deficiencies in the four MMR genes, as well as epigenetic loss of MLH1 in sporadic CRCs, on MSI intensity were assessed. MSI Intensity Scores for CRCs that were MSI-H with the LMR MSI Analysis System and dMMR with IHC were determined . Sporadic MSI-H/dMMR CRCs with epigenetic loss of MLH1 had the highest average MSI Intensity Score, followed in decreasing order by MMR gene deficiencies in MLH1 , MSH2, PMS2 , and MSH6 (sporadic MSI-H, MLH1 , and MSH2 scores were significantly different from MSH6 ; P < 0.001). There was only one case in the study with an EPCAM pathogenic variant. This case was MSI-H, exhibited loss of MSH2 expression by IHC, and had an MSI Intensity Score of 49.2, which is very close to the mean score for cases with germline MSH2 pathogenic variants. In addition to the gene effects, there was variation in MSI intensity between cancer types with the same MMR gene deficiency.
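The study's MSI Intensity Score is defined in its methods and is not reproduced here. As a purely illustrative stand-in, the sketch below combines the two components the text identifies as driving intensity: the fraction of unstable markers and the magnitude of their size shifts. The function name, weighting, and example profiles are all assumptions:

```python
def intensity_sketch(shifts_bp, min_shift_bp: float = 1.5) -> float:
    """Illustrative only -- NOT the published MSI Intensity Score.
    Blends the percentage of unstable markers with the mean absolute
    size shift (bp) of the unstable markers."""
    unstable = [abs(s) for s in shifts_bp if abs(s) >= min_shift_bp]
    if not unstable:
        return 0.0
    pct_unstable = 100.0 * len(unstable) / len(shifts_bp)
    mean_shift = sum(unstable) / len(unstable)
    return pct_unstable * mean_shift / 100.0  # arbitrary normalization

# A gastrointestinal-like profile (large shifts) scores higher than a
# small-shift profile, mirroring the trend reported in the text.
gi = [12.0, 15.0, 9.0, 20.0, 14.0, 11.0, 16.0, 10.0]
endo = [2.0, 0.0, 3.0, 2.0, 0.8, 2.5, 0.0, 2.0]
print(intensity_sketch(gi) > intensity_sketch(endo))  # True
```

Any such composite is monotone in both components, which is the property the reported cancer-type and tumor-size associations rely on.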
Finally, the association between MSI intensity and tumor immune response was examined by assessing the presence of tumor-infiltrating lymphocytes (TILs). A high level of TILs in the tumor indicates that the body has initiated an immune response against the tumor. MSI-H CRC exhibited a high level of TILs (≥1:10 lymphocyte to epithelial nuclei ratio) in 72.9% (124 of 170) of tumors compared with 3.9% (3 of 76) in MSS CRC ( P < 0.001). Overall, MSI Intensity Scores were found to vary between cancer types, among cancers of the same type, by tumor size, by MMR gene deficiency, and between pMMR and dMMR tumors. Further investigation into the utility of MSI intensity as a biomarker for personalizing disease management is needed to confirm these observations.
The Bethesda guidelines for MSI testing recommend testing paired normal and tumor samples because some of the markers in the original Bethesda panel are polymorphic, and to identify a new allele in the tumor sample the germline genotype must be known. The markers NR-21, NR-24, BAT-25, BAT-26, and MONO-27 in the Promega pentaplex panel are quasi-monomorphic, meaning that most individuals in the population have the same size allele, and therefore a common population reference standard can be used in place of a paired normal sample in most cases. , For example, a panel of quasi-monomorphic mononucleotide-repeat markers ( NR-21, NR-24, BAT-25, BAT-26, MONO-27 ) (FALCO biosystems, Kyoto, Japan) was approved as a companion diagnostic for pembrolizumab for the treatment of MSI-H solid tumors without requiring the use of matching normal samples. In contrast, the four LMR markers in the LMR MSI Analysis System are polymorphic and require matching normal samples to achieve the most accurate MSI test results ( and ). Matching normal samples are not always available; therefore, the effectiveness of testing only the tumor samples with the LMR MSI kit was investigated.

The analysis criteria used for MSI testing of tumor-only samples are described in detail in . Briefly, the average normal allele size and QMVR values were calculated for the four SMR markers in the LMR panel using the matching normal samples from the CCFR cohort . QMVR values were not calculated for the four LMR markers because they are polymorphic. The SMR markers were scored as unstable if a tumor allele was outside the respective normal QMVR size range or exhibited an obvious shoulder pattern. For the LMR markers, which lacked QMVR values, tumors with three or more alleles per marker or an obvious shoulder pattern were scored as unstable. In male subjects, the X-linked hemizygous markers BAT-52 and BAT-56 were scored as unstable when two alleles for a given marker were observed.
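The tumor-only scoring criteria above can be expressed as one decision function. This is a sketch of the stated rules; the QMVR ranges and allele values shown are placeholders, not the study's calibrated values:

```python
def tumor_only_unstable(marker_type: str, alleles, qmvr=None,
                        shoulder: bool = False,
                        hemizygous_male: bool = False) -> bool:
    """Score a marker unstable from tumor data alone:
    - SMR: any allele outside the normal QMVR size range, or an
      obvious shoulder pattern.
    - LMR (polymorphic, no QMVR): >=3 alleles or a shoulder pattern;
      for X-linked markers in male subjects, two alleles suffice."""
    if shoulder:
        return True
    if marker_type == "SMR":
        lo, hi = qmvr
        return any(a < lo or a > hi for a in alleles)
    if hemizygous_male:
        return len(alleles) >= 2
    return len(alleles) >= 3

# Placeholder examples
print(tumor_only_unstable("SMR", [114], qmvr=(118, 124)))            # True
print(tumor_only_unstable("LMR", [150, 154], hemizygous_male=True))  # True
print(tumor_only_unstable("LMR", [150, 154]))                        # False
```

The separate LMR branch reflects why matching normal samples matter: without a QMVR, a two-allele LMR genotype in a female subject is indistinguishable from ordinary germline heterozygosity.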
For the LMR MSI Analysis System, the percent agreement for determining tumor MSI status with and without a matching normal sample was 97.6% for CRC samples and 90.9% for non-CRC samples ( P = 0.002) . For CRC samples, there were five false-negative and two false-positive samples of a total of 288 tumors. For non-CRC samples, there were 15 false-negative findings and no false-positive findings of a total of 164 tumors. False-negative results were mainly due to small allele size shifts resulting in new alleles that were still within the normal QMVR range or subtle shoulders that could not reliably be called without a matching normal sample. The two false-positive calls were due to germline heterozygosity in the SMR marker in which one allele fell outside of the normal QMVR size range. Overall, the data indicate that tumor-only testing using the LMR MSI Analysis System is feasible if a matching normal sample is not available, but sensitivity may be reduced, especially for non-CRC samples.

The ability to correctly identify MSI status using only tumor samples was also assessed for the Promega pentaplex panel and compared versus the four SMR markers ( NR-21, BAT-25, BAT-26 , and MONO-27 ) also contained in the LMR panel . For the Promega pentaplex panel, the percent agreement for determining tumor MSI status with and without a matching normal sample was 99% for CRC samples and 86.8% for non-CRC samples ( P < 0.001). Similar results were observed using just the four SMR markers from the LMR panel (for CRC and non-CRC, the percent agreement was 98% and 82.8%; P < 0.001). There were no significant differences in MSI calls using the SMR markers versus the Promega pentaplex panel for either CRC or non-CRC [CRC agreement was 98% versus 99% ( P = 0.504); non-CRC agreement was 82.8% versus 86.8% ( P = 0.363)].
Thus, MSI testing with the Promega pentaplex panel using only CRC tumor samples resulted in a nonsignificant 1% loss of sensitivity compared with tests using both tumor and normal samples, whereas there was a 13.2% loss for non-CRC samples.
LMR MSI Analysis System Performance

This study evaluated the performance of the LMR MSI Analysis System for detection of MSI in a pan-cancer cohort of 469 individuals enriched for Lynch syndrome. For CRC, the sensitivity and specificity of the LMR and pentaplex panels for correctly identifying the underlying MMR status of a tumor using MMR IHC as the reference were not significantly different (sensitivity was 99% versus 97% and specificity 99% versus 99%, respectively) . For non-CRC, the sensitivity of the LMR MSI panel was significantly greater than the Promega pentaplex panel (96% versus 84%), whereas the specificity for detecting pMMR non-CRC tumors was not significantly different. Thus, the major performance benefit for the LMR panel was increased sensitivity for detecting dMMR in non-CRC. The results from the current study are consistent with a previously published validation study by Lin et al that compared the LMR MSI Analysis System with the Promega pentaplex panel. The reported sensitivity and specificity of both panels were 100% for CRC. For endometrial cancers, sensitivity was 98% and 88% for the LMR and pentaplex panels, and specificity was 100% for both panels. An earlier study by Bacher et al using a panel of five LMR markers, including BAT-52, BAT-56 , and BAT-59 from the current LMR MSI Analysis System, reported increased sensitivity of the LMR marker panel compared with the Promega pentaplex panel for detection of dMMR colon polyps. The sensitivity and specificity for detection of dMMR polyps were 100% and 96% for the LMR panel and 67% and 100% for the Promega pentaplex panel using IHC as the reference. In addition to the increased sensitivity of LMR markers in dMMR cancers, the other notable performance advantage is the larger allele size shifts of the LMR markers compared with SMR markers, providing greater confidence in calling variants .
For example, the allele size shifts for MSI-H/dMMR cancers were larger for the four LMR markers (mean, 14.9 bp; range, 0 to 43 bp) compared with those of the four SMR markers (mean, 5.9 bp; range, 0 to 19 bp) in the LMR MSI panel ( P < 0.001). Size shifts varied across cancer types, with tumors of the gastrointestinal tract exhibiting the largest average size shifts (mean, 12.2 bp), and breast (mean, 3.9 bp) and prostate (mean, 3.5 bp) cancers the smallest . The average size shifts generally correlated with ability to detect the dMMR tumor phenotype, with smaller size shifts being associated with decreased sensitivity. This is due in part to the challenge of discriminating small, shifted tumor peaks from the germline stutter peaks, which can lead to misinterpretation of microsatellite patterns. The larger size shifts in LMR markers simplify analysis and increase confidence in MSI calls because novel tumor alleles are generally easily resolved from germline alleles. Another performance advantage associated with the larger size shifts in LMR markers was the improved detection of MSI in MSH6 -deficient tumors, for which other MSI assays have been reported to be suboptimal. The average size shift for MSH6 -deficient CRC in this study was about 9 bp with the LMR MSI panel, which is easily resolved by capillary electrophoresis. It has been shown that MSI sensitivity can vary depending on the MSI marker panel and the method of analysis used. , The use of outdated microsatellite marker panels and interpretation methods likely accounts in part for the reported lower sensitivity for MSH6 -deficient tumors. For example, the Bethesda panel contains dinucleotide repeats that exhibit lower sensitivity for detection of MSH6 -deficient tumors. , In the current study, analysis using the LMR MSI panel of mononucleotide repeats correctly classified 97% of MSH6 -deficient tumors as MSI-H.
Cutoffs and Number of Markers for Optimal MSI Classification

Improved sensitivity of the LMR MSI Analysis System was achieved by optimizing the cutoffs, the number of markers, and the type of markers used. The Bethesda guidelines established the cutoff for MSI-H at two or more unstable markers, or 40% of the five-marker Bethesda panel. Promega also recommends a cutoff of two or more unstable markers for MSI-H for both the LMR and pentaplex panels, although the percentage of unstable markers is different (Promega Technical Manual 649: LMR MSI Analysis System; and Promega Technical Manual 255: MSI Analysis System, Version 1.2). The lower percentage cutoff of 25% for the LMR panel effectively increases sensitivity because instability in any two of eight markers classifies a tumor as MSI-H, compared with two of five markers for the Promega pentaplex and Bethesda panels. ROC analysis of the LMR panel confirmed an optimal cutoff for MSI-H tumor classification at two or more unstable markers, with an area under the curve value of 0.949 . This cutoff was further supported by orthogonal IHC and MMR sequencing data. The number of markers used for MSI determination is also an important consideration. To determine the optimal number of markers for the LMR MSI panel, the overall sensitivity for panels of varying sizes was calculated by using markers with various levels of sensitivity . The results indicate that increasing the number of markers above the eight included in the LMR panel would not substantially improve the performance of the assay. The LMR MSI Analysis System contains eight markers compared with five for the Promega pentaplex panel and uses the same cutoff of two or more markers, instead of a percentage, effectively increasing the sensitivity of the LMR assay. Finally, the repeat motif and variation in performance among markers with the same type of repeat motif can have a profound effect on the overall accuracy of the MSI test.
The LMR panel consists of four mononucleotide markers shared with the Promega pentaplex panel, which have a proven performance record over decades of use for MSI testing, as well as four new LMRs. The inclusion of LMR markers was based on the finding that instability in microsatellites increases exponentially with increasing repeat length, and therefore longer repeats tend to be more sensitive. , , , , This observation has been confirmed in the current study and agrees with previous studies with LMR markers. ,

MSI Testing with Tumor Sample Only

The Bethesda guidelines for MSI testing recommend testing paired normal and tumor samples. The reason for this requirement is that most of the markers in the original Bethesda panel are polymorphic, and to identify a new allele in the tumor sample, the germline genotype must be known. The same is also true for the LMR markers in the LMR MSI Analysis System. Matching normal samples are not always available for MSI testing, and therefore the effectiveness of testing only the tumor samples was investigated. The LMR MSI Analysis System consists of four SMR markers that are quasi-monomorphic and four LMR markers that are polymorphic. Because of this difference, two different sets of criteria were used for MSI classification, as described previously. For the LMR MSI Analysis System, the estimated percent agreement for determining tumor MSI status with and without matching normal sample was 98% for CRC and 91% for non-CRC . The ability to correctly identify MSI status using only tumor samples was also evaluated for the Promega pentaplex panel, which contains all quasi-monomorphic SMR markers. The estimated percent agreement for determining tumor MSI status with and without matching normal sample was 99% for CRC and 87% for non-CRC.
Thus, the data indicate that it is feasible to conduct MSI testing using the LMR MSI Analysis System and the Promega pentaplex panel if a matching normal sample is not available, with minimal loss of sensitivity for CRC. However, this approach can result in decreased sensitivity for non-CRC samples.

MSI Intensity

Analysis of MSI by NGS can provide a quantitative measurement of MSI intensity and has revealed that MSI levels in dMMR tumors fall on a continuous scale. Traditionally, MSI-PCR tests return a yes or no answer for the presence or absence of MSI. However, quantitative MSI measurement is possible with MSI-PCR. A favorable response of patients with MSI-H tumors to immune checkpoint inhibitor therapy has been attributed to the high mutation rate in dMMR tumors, which produces neoantigens recognized by the immune system as foreign and elicits a positive immune response. , Quantitative measures of MSI may be important because variation in MSI intensity has been shown to influence response to immunotherapy in dMMR tumors. For example, Mandal et al investigated genomic MSI levels from tumor exomes of pMMR and dMMR gastrointestinal tumors from patients receiving anti–programmed cell death 1 therapy. They found that clinical responders were associated with higher intensities of MSI and, conversely, patients with progressive disease had the lowest levels. In the current study, variation in MSI intensity in different cancer types was observed and reported as a semi-quantitative MSI Intensity Score . The cancer types with the highest MSI scores (such as colorectal, small intestine, and gastric cancers) typically respond well to immune checkpoint inhibitor therapy, whereas those cancer types with lower MSI scores (such as prostate and breast cancers) are notoriously resistant to immune checkpoint blockade.
, , MSI Intensity Scores between patients with Lynch syndrome and sporadic MSI-H cancer were not significantly different , in agreement with reports showing no difference in the immune checkpoint inhibitor therapy response rates between Lynch and sporadic MSI cancer patients. , Differential effects of MMR gene deficiencies on MSI intensity were also observed in this study, with MSH6 inactivation resulting in the lowest MSI Intensity Scores . Despite lower intensity, the LMR MSI Analysis System still detected 97% of the MSH6 -deficient tumors across all cancer types. Importantly, the body’s immune response to a tumor, as measured by the level of TILs, was associated with the MSI-H phenotype (ie, tumors exhibiting a high level of TILs were found to have a higher MSI Intensity Score). It is important to note that MSI Intensity Scores varied widely between individuals, even in those with the same type of cancer or MMR gene deficiency ( and ). Thus, MSI intensity as a biomarker may have utility for personalizing immunotherapy. Response to immune checkpoint inhibitors is complex and likely involves many variables. More research is needed to clarify the role of MSI intensity in predicting response to immune checkpoint inhibitor therapy.

Strengths and Limitations of This Study

Strengths of the current study include the comprehensive MSI evaluation of a large number of dMMR tumors from 20 cancer types using both the new LMR MSI Analysis System and the current gold standard MSI-PCR assay. The availability of orthogonal tests for dMMR status (ie, MMR IHC, germline MMR mutation analysis, tumor MLH1 methylation, family history) for most samples allowed for accurate determination of the performance of the LMR test. A limitation of this study was that the samples were all obtained through the CCFR and by design were primarily derived from patients with Lynch syndrome.
Therefore, there were only a small number of cases for cancer types with low prevalence in Lynch syndrome, and there was a limited number of MSS cancers. The expected low number of MSS CRCs in the CCFR Lynch syndrome cohort was compensated for by inclusion of sporadic MSS CRCs. However, similar controls were unavailable for non-CRCs, which limits the reliability of specificity values calculated for non-CRCs. Undetected carriers of MMR germline pathogenic variants may exist in the study population because germline MMR sequencing for MSH6 and PMS2 mutations was not available for all cases. In addition, IHC results for MSH6 and PMS2 were not available for all cases.
MSI as a biomarker has evolved over the years since its discovery in 1992. , Originally, MSI testing was used to identify individuals with Lynch syndrome, but the demand for MSI testing has exploded with the discovery that the MSI tumor status predicts response to certain types of immunotherapy. The introduction of the Bethesda microsatellite marker panel in 1998 helped to standardize MSI testing. This panel has been largely displaced by the next key advancement in MSI testing that occurred in 2004, the development of a panel of all mononucleotide repeat markers, which improved sensitivity and specificity of detecting dMMR tumors. The next major improvement in MSI-PCR testing came with the introduction of “long mononucleotide repeats” in the LMR MSI Analysis System in 2021. , In the current study, we show that the sensitivity and specificity of the LMR MSI Analysis System for detection of dMMR CRC were 99% and 96%, respectively. Importantly, the sensitivity of the LMR MSI Analysis System for detection of dMMR in non-CRCs was similar at 96%. Specificity was lower in non-CRCs than previously reported, likely due to small sample size and potential false-negative IHC results. The overall percent agreement between the LMR and pentaplex panels was high for CRC (99%) but lower for non-CRC (85%) tumors. Thus, the LMR MSI panel showed high concordance in CRC and greater sensitivity in non-CRC compared with the Promega pentaplex panel. The LMR MSI Analysis System also correctly identified 97% of challenging MSH6-deficient tumors from eight different cancer types, including all CRCs and all but one endometrial cancer. An increased number of unstable markers and the larger size shifts observed in dMMR tumors using the LMR panel reduce ambiguity in MSI calling and the necessity to repeat runs. 
Thus, the introduction of the LMR MSI Analysis System takes another leap forward in the evolution of MSI testing and expands the spectrum of cancer types in which MSI can be accurately detected.
J.W.B., E.B.U., E.E.S., I.V., and D.R.S. are research scientists employed by Promega Corporation. J.R.E. has received grant funding from Promega Corporation for an independent validation study of the LMR MSI Analysis System. R.B.H. has received grant funding from Promega Corporation for research on detection of microsatellite instability in colon polyps and the utility of the LMR MSI Analysis System for predicting response to immunotherapy.
Comparing the Effectiveness of Different Dietary Educational Approaches for Carbohydrate Counting on Glycemic Control in Adults with Type 1 Diabetes: Findings from the DIET-CARB Study, a Randomized Controlled Trial

Effective treatment of individuals with type 1 diabetes (T1D) relies on continuous self-management and evidence-based nutrition education tailored to individual needs . This nutrition education aims to empower individuals by providing them with essential skills, knowledge, confidence, and autonomy to manage the complexities of T1D, particularly in relation to dietary intake, blood glucose levels, and prandial insulin dosing . Carbohydrate counting stands out as a key aspect of diabetes-specific nutrition education for individuals with T1D on multiple daily injections (MDIs) insulin therapy, as recommended by international guidelines . It involves meticulously tracking and counting the carbohydrate intake in grams throughout the day, enabling individuals to more effectively adjust their mealtime insulin doses and regulate their blood glucose levels. Despite its importance, uncertainties persist regarding the impact of different educational approaches to carbohydrate counting, particularly concerning the most effective delivery methods and influence on clinical outcomes. Two distinct levels of carbohydrate counting, basic (BCC) and advanced (ACC), have been defined, each highlighting different approaches and levels of complexity . BCC focuses on maintaining a consistent carbohydrate intake in terms of type, amount, and distribution throughout the day, using a more intuitive and flexible approach to mealtime insulin adjustments based on prescribed insulin doses. In contrast, ACC involves personalized mealtime insulin adjustments based on carbohydrate intake using algorithms. 
Ideally, effective carbohydrate counting management requires accurate calculation and subsequent mealtime insulin dosing based on carbohydrate-to-insulin ratios, insulin sensitivity, and other factors (e.g., physical activity), encompassing aspects of both BCC and ACC. However, barriers such as a lack of motivation, low numeracy or literacy, and inaccurate carbohydrate estimations have been associated with poorer glycemic control using ACC . While technologies such as bolus calculator apps for smartphones aim to simplify ACC for individuals with T1D treated with MDIs therapy, they do not fully eliminate the need for individuals to self-estimate carbohydrate portion sizes, which impact glycemic outcomes . Moreover, empirical data, including clinical observations, show that ACC may be too complex for some individuals to manage effectively. Despite the proven efficacy of ACC in reducing HbA1c , evidence on the educational impact of BCC remains limited. This knowledge gap is particularly important, as BCC may offer a more feasible approach for some individuals, potentially leading to better adherence and improved glycemic outcomes. Additionally, while group-based dietary educational approaches show promise, they are relatively underexplored compared with individual dietary counseling, which remains the standard of dietary care worldwide . Understanding the comparative effectiveness of BCC and ACC, particularly in group settings, may be crucial for optimizing dietary education strategies and improving glycemic control in T1D. Accordingly, our study aimed to investigate the efficacy of two dietitian-led, group-based educational approaches for carbohydrate counting (BCC and ACC) compared with individual dietary counseling (standard of dietary care) on improving glycemic control among adults with T1D treated with MDIs after six months of treatment. 
We hypothesized that both BCC and ACC would be superior to standard dietary care in reducing HbA1c or the mean amplitude of glycemic excursions (MAGEs). Additionally, we hypothesized that BCC would be equivalent to ACC in reducing HbA1c or MAGEs.
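The ACC approach rests on the standard two-part mealtime dose: a food bolus derived from the carbohydrate-to-insulin ratio plus a correction bolus derived from the insulin sensitivity factor. A minimal sketch of this textbook calculation, with illustrative parameter values (this is a generic formula, not the trial's or the MySugr app's exact algorithm):

```python
def mealtime_bolus(carbs_g, bg_mmol, target_bg_mmol, icr_g_per_u, isf_mmol_per_u):
    """Illustrative mealtime bolus: food bolus plus correction bolus.

    carbs_g: carbohydrate content of the meal in grams
    icr_g_per_u: carbohydrate-to-insulin ratio (grams covered per unit)
    isf_mmol_per_u: insulin sensitivity factor (mmol/L glucose drop per unit)
    """
    food_bolus = carbs_g / icr_g_per_u
    # Only correct downward-from-above-target glucose; no negative bolus.
    correction = max(0.0, (bg_mmol - target_bg_mmol) / isf_mmol_per_u)
    return round(food_bolus + correction, 1)
```

For example, with a ratio of 10 g/U and a sensitivity factor of 2.0 mmol/L per unit, a 60 g meal at a glucose of 9.0 mmol/L against a 6.0 mmol/L target gives a 6.0 U food bolus plus a 1.5 U correction, 7.5 U in total.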
2.1. Study Design and Participants A detailed description of the trial has been previously published in a protocol paper . The study was a single-center, parallel-group, randomized, controlled, open-label, superiority trial conducted at Steno Diabetes Center Copenhagen, a tertiary health care facility in the Capital Region of Denmark, over a 12-month period. Inclusion criteria comprised individuals aged 18–75 years, diagnosed with T1D and treated in an outpatient diabetes clinic in the Capital Region of Denmark, undergoing treatment with MDIs, having a diabetes duration exceeding 12 months, and an initial HbA1c level of 53–97 mmol/mol. Exclusion criteria included currently practicing carbohydrate counting or a low daily carbohydrate intake (defined as <100 g per day), engagement in a carbohydrate counting program within the past two years, use of an automated bolus calculator or insulin pump, planning to initiate insulin pump therapy during the study period, use of split-mixed insulin therapy, gastroparesis, uncontrolled medical issues affecting dietary intake, pregnancy or lactation, planning pregnancy during the study period, involvement in other clinical trials, and inability to comprehend the informed consent or the study procedures. The screening and study visits (at baseline, after the six-month intervention, and at six-month follow-up) were conducted by the study personnel. The inclusion period spanned from October 2018 to August 2021. This study was approved by the Scientific Ethics Committee in the Capital Region of Denmark and the trial is registered at ClinicalTrials.gov (NCT03623113). 2.2. Screening and Randomization Individuals interested in participating attended a screening visit. During the screening visit, eligible participants were randomly assigned in a 1:1:1 ratio by the study investigator or personnel to receive either BCC, ACC, or standard dietary care (control group) through the use of a randomization module . 
The randomization module was based on a randomization list that was generated and uploaded to the electronic data management system REDCap (version 8.10.18, Vanderbilt University, Nashville, TN, USA) by an external statistician prior to the commencement of the trial. 2.3. Interventions Participants assigned to standard dietary care attended three individual dietary counseling sessions, totaling 2 h, conducted at weeks 0, 2, and 12. In these sessions, personal dietary goals were established based on the overall metabolic goal, preferences for dietary adjustments were explored, guidance on carbohydrate awareness (including the glycemic index of foods), meal planning, and portion management was provided, and personal queries or worries regarding dietary management of T1D were addressed. Participants in the BCC group attended three group sessions (4–8 participants), with a total duration of 8 h, held at weeks 0, 2, and 12. This structured group-based BCC program was designed to empower participants in managing their postprandial blood glucose levels by regulating their carbohydrate intake. The program included concise theoretical presentations on food and nutrition in relation to diabetes, problem-solving exercises, and practical sessions focusing on identifying carbohydrates and estimating carbohydrate portion sizes across various foods. Participants explored different methods of carbohydrate monitoring, including the use of nutrition labels, interpreting carbohydrate tables, and employing smartphone applications. Additionally, participants were instructed to keep a dietary log to track carbohydrate intake and blood glucose levels over a 4-day period, facilitating the development of a personal carbohydrate plan. The program also included discussions on dietary coping strategies and incorporated peer modeling and support. 
Participants in the ACC group attended one group session (4–8 participants), lasting 4 h, held at week 0, followed by two individual dietary follow-up sessions totaling 1.5 h conducted at weeks 2 and 12. The ACC program included instruction on how to use an automated bolus calculator (MySugr Pro, Roche Diabetes; app available in Google Play and the App Store). The bolus calculator was set with personalized ratios for the insulin sensitivity factor (for blood glucose adjustments) and carbohydrate-to-insulin dosing at meals. These ratios were estimated by a dietitian based on each participant’s 7-day dietary recordings, including blood glucose measurements and mealtime insulin dosages. The teaching approach integrated theoretical and practical training, drawing on real-life examples and experiences with T1D. Sessions in all three study groups were conducted by the same trained dietitians, following a structured curriculum, with supervision by an endocrinologist when necessary. Further details regarding the BCC and ACC programs, as well as standard dietary care, are available in the protocol paper . Participants were advised to maintain consistent physical activity patterns throughout the study, while medical adjustments, including insulin adjustments, were permitted when needed. 2.4. Compliance Participants in the ACC group were advised to use the automated bolus calculator for meals with 15 g or more of carbohydrates, while participants in the BCC group were instructed to follow their personal carbohydrate plan daily for all meals . Compliance with automated bolus calculator usage in the ACC group was assessed through exported app data indicating each instance of usage. Compliance with the personal carbohydrate plan in the BCC group was assessed based on the question: “How often do your meals deviate from your personal carbohydrate plan prescribed by the dietitian?”, using a visual analogue scale (VAS) ranging from never (0) to always (100). 
Compliance was not assessed for the standard dietary care group. 2.5. Outcome Measures The primary outcomes included changes in HbA1c and MAGEs, with the latter measuring glycemic variability, from baseline to end-of-treatment at six months. Secondary and exploratory outcomes included changes from baseline to end-of-treatment at six months and after six months of follow-up in other clinically relevant metabolic markers including time in range (TIR), time below range (TBR), time above range (TAR), coefficient of variation (CV), mean plasma glucose based on data from a blinded continuous glucose monitoring (CGM) device, and HbA1c (at six months follow-up), as well as body weight, body composition, blood pressure and lipid profile, total insulin dose, prandial insulin dose, and basal insulin dose. Additionally, changes in skills related to numeracy and carbohydrate estimation accuracy, patient-reported outcomes (diabetes diet-related quality of life (DDQOL), perceived dietitian-related autonomy support (HCCQ), and competencies in diet and diabetes (PCDS)) and behavioral outcomes (dietary changes in intake of total energy, macronutrients, added sugar, and dietary fibers based on 4-day dietary recordings), and changes in level of physical activity assessed by the Danish version of the International Physical Activity Questionnaire—Short Form (IPAQ-SF) were assessed from baseline to end-of-treatment at six months and after six months of follow-up (diet only at baseline and after six months intervention). Details regarding these outcomes can be found in the protocol paper . 2.6. Sample Size The trial was designed with 80% statistical power (α = 0.05) to detect a difference in HbA1c of 3.5 mmol/mol (SD 7 mmol/mol) between the BCC group and the standard care group or the ACC group and the standard care group. 
This determination was primarily informed by findings from experimental studies investigating the impact of BCC and meta-analyses of RCTs evaluating the effect of ACC compared with a control or usual dietary care group. These studies found HbA1c reductions ranging from 3 to 7 mmol/mol in adults with T1D. Notably, participants in these studies exhibited poorer diabetes control (60−108 mmol/mol) compared with our study’s eligible participants (53–97 mmol/mol). Thus, we anticipated smaller HbA1c reductions in our study population but still deemed them clinically significant within a multidisciplinary approach for managing hyperglycemia in T1D. The clinical target for MAGEs in T1D is still unknown , but the trial was designed to detect a difference of ≥0.35 mmol/L (SD 0.7 mmol/L) in MAGEs between the groups . Taking these assumptions into account, along with an anticipated 20% dropout rate, the required sample size was calculated to include 231 participants in total, with 77 participants assigned to each group. 2.7. Changes Due to the COVID-19 Pandemic Due to the COVID-19 pandemic, all non-urgent outpatient appointments were transitioned to virtual appointments from March until September 2020, and again from November 2020 to February 2021, due to a resurgence in COVID-19 cases. Additionally, scheduled study visits for enrolled participants were postponed during these lockdown periods. Consequently, most visits, particularly the final six-month follow-up visits, were delayed and spread out over a longer period than originally planned. As a result, the trial stopped participant recruitment in September 2021 before reaching the intended sample size and without prior data review . 2.8. Statistical Analyses Baseline data are reported as means with standard deviations (SD) for continuous variables following a normal distribution and as medians with interquartile ranges (25th and 75th percentiles) for non-normally distributed variables. 
Categorical variables are presented as frequencies and percentages. Intention-to-treat analyses, using all available data, were performed to compare treatment effects across the study groups for the prespecified primary outcomes, HbA1c and MAGEs, and selected secondary and exploratory outcomes. Treatment effects are presented as baseline-adjusted differences between groups for all outcomes. Linear mixed-effects models were used to model the outcomes, with baseline corrections made by setting all participants in the control group at baseline. Fixed effects included visit and the interaction between treatment group and visit. Before estimating treatment effects, residuals were evaluated graphically to check assumptions of normality and homogeneity of variances. Where needed, outcomes were log-transformed for analysis and subsequently back-transformed for presentation. The estimated mean differences in changes (with 95% confidence intervals) between and within groups are provided, along with two-sided p-values. Equivalence testing was conducted to compare the effect of BCC versus ACC on HbA1c and MAGEs. Equivalence was established if the 90% confidence intervals (CI) for the estimated difference in change in HbA1c or MAGEs between the two groups fell entirely within the predefined equivalence margins according to the statistical analysis plan . If the confidence interval exceeded these margins in either direction (negative or positive), equivalence was not claimed. The analysis was performed using the same linear mixed-effects model described above. 
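The equivalence rule above — claim equivalence only when the entire 90% CI lies inside the margins — reduces to a simple interval check. A minimal sketch, with a hypothetical helper name:

```python
def is_equivalent(ci_low, ci_high, margin):
    """True only if the entire confidence interval lies within [-margin, +margin]."""
    return -margin <= ci_low and ci_high <= margin
```

Applied to the results reported below, a 90% CI of −1.23 to 1.78 mmol/mol sits inside the ±3.5 mmol/mol HbA1c margin (equivalence), whereas a 90% CI of −5.02 to 2.38 mmol/L exceeds the ±0.35 mmol/L MAGE margin (no equivalence claim).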
The false discovery rate (FDR) for secondary and exploratory outcomes was controlled with the Benjamini and Hochberg method, applying a threshold of <5% . Missing data were handled using maximum likelihood estimation in the linear mixed model, under the assumption that the data were missing at random. All statistical analyses were conducted using SAS Enterprise Guide version 8.3 Update 3 (SAS Institute Inc., Cary, NC, USA) and R software version 4.0.2 (R Core Team, R Foundation for Statistical Computing, Vienna, Austria).
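The sample-size statement in Section 2.6 (difference 3.5 mmol/mol, SD 7 mmol/mol, α = 0.05, 80% power, 20% dropout) follows the standard two-sample normal-approximation formula. A sketch under those stated inputs; the exact dropout inflation used by the authors is not specified, so the recruitment figure here is only approximate:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Evaluable participants per arm for a two-sample comparison of means."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    return ceil(2 * (z_alpha + z_beta) ** 2 * (sd / delta) ** 2)

evaluable = n_per_group(delta=3.5, sd=7)    # 63 evaluable participants per arm
recruit = ceil(evaluable / (1 - 0.20))      # inflate for an expected 20% dropout
```

This gives figures in the vicinity of the 77 per group (231 total) reported, with small differences attributable to rounding and to how the dropout adjustment is applied.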
We assessed 144 individuals for eligibility: 48 either declined the invitation or did not respond and 33 did not meet the inclusion criteria. Ultimately, 63 participants were enrolled and randomly assigned: 20 to the BCC group, 21 to the ACC group, and 22 to the standard care group (control). A flow diagram, included as , illustrates the number of participants who dropped out or were lost to follow-up in each group during the various phases of the trial, along with the reasons for these occurrences. Analyses are based on data from 53 out of the 63 participants, as 10 participants (BCC, n = 2; ACC, n = 3; standard care, n = 5) withdrew before baseline data had been collected. displays the clinical and sociodemographic data for individuals who dropped out before baseline measurements. Data were collected during the screening visit after study inclusion. In general, more women and participants with a higher HbA1c dropped out. The baseline characteristics of the participants by allocation are shown in and in . Overall, the study population had an average age of around 44 years with 70% males, an average diabetes duration of 17 years, and moderately uncontrolled glycemic regulation with a median HbA1c of 64 mmol/mol. Fifty-five percent used a CGM or a flash glucose monitoring (FGM) device at inclusion. During the intervention period, seven participants initiated the use of CGM/FGM (n = 3 in ACC, n = 2 in BCC, and n = 2 in standard care). At follow-up, one more participant in ACC, one more in BCC, and two more in standard care had initiated CGM/FGM use. Three participants had been prescribed glucagon-like peptide 1 receptor agonists (GLP-1RAs), while 25% were on antihypertensive medication, and 32% had been prescribed lipid-lowering medication . Few changes in these prescribed drugs were observed during the study period . 
The average number of outpatient diabetes clinic visits with an endocrinologist during the intervention period was 1.3 for the BCC group, 1.1 for the ACC group, and 1.0 for the standard care group. For nurse consultations, the averages were 1.2 (BCC), 0.9 (ACC), and 0.8 (standard care). Additionally, participants attended an average of 2.9 group-course dietitian sessions (BCC), 3.1 mixed group and individual sessions (ACC), or 3.0 individual dietitian consultations (standard care). During the follow-up period, the average number of visits with an endocrinologist was 0.9 for the BCC group, 1.1 for the ACC group, and 0.8 for the standard care group. Nurse consultations averaged 0.5 (BCC), 0.9 (ACC), and 0.8 (standard care), while dietitian visits were 0.0 (BCC), 0.1 (ACC), and 0.1 (standard care). presents self-reported data on the frequency and methods used (experience-based versus carbohydrate calculations) for mealtime insulin dosing at baseline, end-of-treatment, and follow-up for all study groups.
3.1. Compliance
Data from the automated bolus calculator indicated that 60% of participants in the ACC group utilized the bolus calculator app multiple times daily, 13% used it several days per week, and the remaining 27% employed the app more intermittently during the intervention period. Between end-of-treatment and follow-up, 50% of participants still used the bolus calculator multiple times daily, 14% used it intermittently, and 36% had stopped using the bolus calculator app. In the BCC group, participants reported using the personal carbohydrate plan 51% of the time (IQR 24, 73) during the intervention period and 48% of the time (IQR 38, 61) at follow-up.
3.2. Primary Outcomes
Compared with standard care, no treatment effects were observed for the BCC intervention on HbA1c (1 mmol/mol (−3 to 5 [0.1%, −0.3 to 0.5]); p = 0.663) or MAGEs (0.4 mmol/L (−1.1 to 1.9); p = 0.590), nor for the ACC intervention on HbA1c (−1 mmol/mol (−4 to 3 [−0.1%, −0.4 to 0.3]); p = 0.779) or MAGEs (0.7 mmol/L (−0.8 to 2.1); p = 0.360) from baseline to end-of-treatment at six months ( A,B, ). Individual changes in HbA1c and MAGEs from baseline to end-of-treatment for completers in all three study groups are shown in C,D. For HbA1c, the 90% CI for the estimated difference in change between BCC and ACC was −1.23 to 1.78 mmol/mol, which lay entirely within the predefined equivalence margin of ±3.5 mmol/mol. For MAGEs, the 90% CI for the estimated difference in change between BCC and ACC was −5.02 to 2.38 mmol/L, which exceeded the predefined equivalence margin of ±0.35 mmol/L. These results indicate equivalence in effect between the BCC and ACC interventions on HbA1c, but not on MAGEs, suggesting that the two interventions may differ in their effect on the latter outcome.
3.3. Secondary and Exploratory Outcomes
No differences in secondary or exploratory outcomes were found between the study groups, except for total energy intake and saturated fat intake, which remained significant in favor of ACC after multiple testing adjustments. The estimated treatment difference was −10 g/day (95% CI: −16 to −5; p < 0.001) for saturated fat and −2204 kJ/day (95% CI: −3281 to −1126; p < 0.001) for total energy intake (shown in ). Changes from baseline to end-of-intervention in carbohydrate intake, median carbohydrate estimation errors, insulin dose, and time-in-range are presented in A,D. Delta values for changes in person-reported outcomes from baseline to end-of-intervention are presented in for diabetes diet-related quality of life (DDQOL) and for perceived dietitian-related autonomy support (HCCQ) and competencies in diet and diabetes (PCDS). 
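The equivalence criterion applied to the primary outcomes, namely that equivalence is declared only when the confidence interval for the between-group difference lies entirely within the prespecified margin, can be illustrated with the reported values. This is a generic sketch of the decision rule, not the trial's analysis code:

```python
def is_equivalent(ci_low, ci_high, margin):
    """Equivalence is declared only when the CI for the treatment
    difference lies entirely within [-margin, +margin]."""
    return -margin <= ci_low and ci_high <= margin

# Reported 90% CIs for the BCC-ACC difference in change:
hba1c_equivalent = is_equivalent(-1.23, 1.78, 3.5)   # margin ±3.5 mmol/mol → True
mage_equivalent = is_equivalent(-5.02, 2.38, 0.35)   # margin ±0.35 mmol/L → False
```

The asymmetry of the result is immediate: the HbA1c interval fits inside its margin, while the much wider MAGE interval cannot, regardless of where its point estimate sits.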
Additional supplementary secondary/exploratory outcomes are presented in .
Our study found that group-based education in ACC and BCC did not lead to improvements in glycemic control, as measured by HbA1c and MAGEs, when compared with individual dietary counseling for individuals with T1D treated with MDIs. ACC and BCC were equivalent in their effect on HbA1c, but equivalence could not be demonstrated for MAGEs. No further relevant effects were seen for the secondary or exploratory outcomes. These findings contrast with systematic reviews and meta-analyses based on up to six randomized trials, which have reported HbA1c reductions of 4 mmol/mol (0.4%) to 7 mmol/mol (0.6%) in favor of ACC when compared with a control group in adults with T1D, although substantial heterogeneity was reported across the included trials . The fact that the estimated target sample size was not reached in our study may have contributed to the lack of observed effects, leading to inconclusive study results. However, key differences in study design and the patient populations may also have influenced the results. Notably, previous randomized trials often included participants with more poorly controlled T1D, characterized by higher baseline HbA1c levels (≥59 mmol/mol (7.5%)), as an inclusion criterion and used general diabetes education or usual care as the control, without any dietary intervention . In contrast, our study used individual dietary counseling as the control. These differences may explain some of the observed variation in outcomes, suggesting that group-based carbohydrate counting interventions may be less effective for individuals with moderately uncontrolled T1D. Interestingly, all three study groups showed improvements in glycemic regulation during both the intervention and follow-up periods. While this may reflect a study or time effect, it is also possible that the uniform improvement across all study groups indicates that the dietary interventions, regardless of their format, had a beneficial impact on glycemic regulation. 
No changes in the ability to estimate carbohydrates accurately were found after either the BCC or the ACC intervention compared with standard dietary care. In our previous study involving participants with uncontrolled type 2 diabetes, we found that participants significantly improved their carbohydrate estimation skills after BCC education; however, their baseline skills were notably poorer than those of our participants with T1D . This aligns with the observation that most individuals with T1D have, at some point, engaged in improving their carbohydrate assessment skills after diagnosis, whereas this has not traditionally been a focus in type 2 diabetes management. We found that total energy intake and intake of saturated fat were reduced in the ACC group compared with the standard dietary care group, even after adjusting for multiple testing. However, these reductions did not translate into clinically meaningful improvements, such as greater weight loss associated with lower energy intake or reductions in plasma cholesterol levels following the reduction in saturated fat intake in the ACC group at the end of the intervention. These changes in dietary outcomes may be influenced by the inherent limitations of self-reported dietary intake, which is often subject to inaccuracies, particularly underreporting and selective misreporting (e.g., the underreporting of unhealthy foods high in sugar and fat and the overreporting of healthy foods such as vegetables) . These inaccuracies also complicate the interpretation of the effects of dietary changes on clinical outcomes. This study faced several important limitations, including a substantially smaller sample size than planned and delays in study visits due to COVID-19, both of which reduced the statistical power of our trial. The observed difference in the effects of BCC and ACC on MAGEs may also be attributable to the lack of power, as the width of the confidence intervals is heavily influenced by sample size. 
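The dependence of confidence-interval width on sample size is worth making explicit: for a mean, the half-width scales as 1/sqrt(n), so halving an interval requires roughly four times the sample size. A minimal sketch (illustrative only, not the trial's analysis code):

```python
import math

def ci_half_width(sd, n, z=1.96):
    """Half-width of an approximate 95% CI for a mean: z * sd / sqrt(n)."""
    return z * sd / math.sqrt(n)

# The width shrinks with 1/sqrt(n): quadrupling n halves the interval,
# e.g. ci_half_width(1.0, 100) is exactly half of ci_half_width(1.0, 25).
```

This is why a shortfall against the planned sample size widens the CIs enough to make an equivalence margin unreachable even when the point estimate is small.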
Additionally, there was a high dropout rate after study inclusion but prior to study commencement, particularly in the standard dietary care group, which may have biased the results. This dropout does not seem to be primarily due to a preference for group interventions, as only one participant explicitly cited this reason. Instead, most dropouts across all groups were attributed to external factors such as COVID-19 and personal issues. This suggests that the mode of intervention (individual vs. group) was likely not an important factor in participant retention, with external circumstances playing a more prominent role. The BCC program was designed to improve carbohydrate counting accuracy and ensure day-to-day consistency of carbohydrate intake according to a personalized carbohydrate plan, reducing carbohydrate overload at each meal. Meanwhile, the ACC program aimed to enhance mealtime insulin dose accuracy through the use of an automated bolus calculator. However, compliance data indicated that only approximately 60% of participants in the ACC group used their bolus calculator app several times daily during the study. Similarly, self-reported data showed that use of the personal carbohydrate plan was moderate in the BCC group, suggesting limited use by a notable proportion of participants. This lack of sustained adherence to the dietary interventions may be attributed to issues of acceptability, feasibility, and insufficient motivation to fully engage with the prescribed methods. Considering that participants in all three study groups had only three visits with a dietitian, either in groups or individually, ongoing support and follow-up, such as regular check-ins via phone or digital platforms, could have enhanced participant engagement and motivation. This was particularly relevant during the COVID-19 pandemic, when most individuals were homebound and poorer dietary habits have been reported . 
The involvement of multiple dietitians in this study was intended to replicate real-world outpatient diabetes practice. However, this approach may have inadvertently undermined the consistency of the interventions. Although feasibility testing was conducted with a group of adults with T1D using MDIs insulin therapy prior to the study, greater investment in user involvement in the development of the study interventions might have further improved adherence and, potentially, glycemic outcomes. Another notable limitation is the use of MAGEs as a primary outcome for assessing glycemic variability. When this study was designed in 2017, MAGEs was more commonly used in research settings in Denmark. However, in recent years, international guidelines have recommended time in range as a more reliable and clinically relevant metric for evaluating glycemic control in both research and clinical contexts, especially with the widespread use of CGM and FGM devices in individuals with T1D . Moreover, this study’s design, which predates substantial advancements in diabetes technology, may not fully represent current clinical practices. At the study’s outset, only 55% of participants were using CGM or FGM devices for diabetes self-management. By August 2024, however, 95% of adults with T1D in our clinic were utilizing glucose sensor technology. Nationally, the adoption rate has also risen, though it remains lower, with approximately 60% of adults with T1D currently using such devices . These technological advancements have markedly altered how diabetes is managed, particularly through the utilization of real-time glucose data for more precise insulin dose adjustments. The widespread use of glucose sensors has independently improved diabetes self-management and glycemic outcomes . Consequently, dietary education programs, such as those employed in this study, may need to be adapted to incorporate these modern technological tools and practices. 
In addition, the growing integration of advanced diabetes technologies and artificial intelligence (AI) into the management of T1D presents considerable potential for improving glycemic control through decision support systems that optimize automated insulin therapy . Recent advancements, including automated insulin delivery (AID) systems and more sophisticated insulin pumps, are reshaping how insulin is administered and how dietary education should be approached in clinical settings. However, while AID systems enable individuals with difficulties in precise carbohydrate counting to achieve better glycemic outcomes, higher accuracy in carbohydrate estimation remains crucial for optimal glycemic control . At the same time, AI-driven food recognition systems, though not yet implemented at a national level, are steadily evolving, with the potential to improve carbohydrate estimation accuracy based on meal images . Future educational programs should incorporate these technological advancements, utilizing AI and digital health tools to facilitate more personalized and adaptive approaches to insulin therapy. Furthermore, large-scale diabetes data ecosystems, integrating data from multiple interoperable devices (CGM, insulin pumps, smart pens, and activity trackers), are increasingly employing AI techniques to personalize therapy and enhance patient outcomes . While our study primarily focused on traditional carbohydrate counting methods and MDIs therapy, which remain prevalent among the population studied, it is clear that future T1D management will necessitate the integration of traditional methods with cutting-edge technological innovations. Recognizing the critical role of AI and connected devices in insulin therapy represents a pivotal shift in diabetes care which should be reflected in ongoing research and clinical practice. 
Currently, the ACC and BCC programs have been merged into a standardized dietary care model offered to adults with T1D undergoing MDIs therapy at our clinic. This includes a 3 h group session dedicated to practical carbohydrate counting education (shortened but similar to the original BCC program), followed by another 3 h session focused on accurate insulin dose adjustment (similar to the original ACC program). These sessions cover the calculation of individualized insulin-to-carbohydrate ratios, insulin sensitivity factors, and the use of various bolus calculators based on personal preferences in conjunction with CGM or FGM devices. A follow-up individualized dietary counseling session (45–60 min), preferably conducted digitally, addresses personal challenges in applying these strategies. The implementation of this combined group-based and individualized dietary education model, as exemplified in our current ACC–BCC program, represents an adaptable and cost-effective approach. Preliminary cost estimates suggest that the time and resources required to prepare and educate individuals with T1D on MDIs therapy amount to an average of 2.5 h of a dietitian’s time per patient, consistent across all three original interventions in the study, as well as the newly integrated ACC–BCC program. The cost estimation indicates that tailoring the education to fit patient preferences and needs can be achieved without increasing overall time and resource demands.
In conclusion, our study found no significant improvements in HbA1c or MAGEs from group-based interventions with practical, interactive education in the ACC or BCC programs compared with individual dietary counseling in adults with longstanding, moderately uncontrolled T1D on MDIs insulin therapy. This may reflect either the absence of a clinical effect or study limitations like the small sample size and adherence challenges. While the findings remain relevant, particularly for populations still utilizing MDIs therapy, the rapid advancements in technology demand continuous updates to dietary education and insulin dosing strategies. Future research should explore how emerging tools such as AI and automated systems can enhance carbohydrate counting and insulin therapy, ensuring that clinical practices evolve alongside technological innovations in diabetes management, while also recognizing the critical importance of patient engagement and preferences in optimizing outcomes in everyday diabetes care.
Efficacy of different forms of concentrated growth factors combined with deproteinized bovine bone minerals in guided bone regeneration: a randomized clinical trial | 77dcb718-00dd-48d1-b243-2fae9e4aab00 | 11869682 | Dentistry[mh] | Guided bone regeneration (GBR) is a common method for the repair of peri-implant bone defects and offers the advantages of good bone formation predictability, a low long-term bone resorption rate, convenient filling and shaping, no secondary region, and few surgical complications when used for the restoration of damaged regions . GBR uses granular graft material and barrier membrane and has become the standard procedure that has enabled sustained clinical success . Various bone transplant materials are currently available, including autologous, allogeneic, and xenograft materials. One of the most popular xenografts is deproteinized bovine bone mineral (DBBM). DBBM is a xenograft derived from species genetically unrelated to the host. Due to the high temperature, the organic components in DBBM are removed, so the ideal biocompatibility can be achieved. Additionally, the crystal structure of DBBM closely resembles that of human cancellous bone, providing an excellent scaffold for new bone formation . Compared to autologous bone grafts, DBBM minimizes surgical trauma, eliminates the need for a secondary surgical site, and reduces postoperative complications . However, DBBM has certain limitations. For example, DBBM does not initiate new bone regeneration by itself and cannot synchronize with the osteogenic rate . Growth factors are pivotal in tissue regeneration and reconstruction . Some research has shown that adding biological promoters containing the necessary growth factors to the graft material can accelerate tissue regeneration and improve the osteoinductive bone remodeling process . 
As a third-generation platelet concentrate , concentrated growth factor (CGF) includes a variety of growth factors, such as platelet-derived growth factors (PDGFs), transforming growth factor (TGF) β1 and β2, fibroblast growth factors (FGFs), vascular endothelial growth factors (VEGFs), and insulin-like growth factors (IGFs), which play a positive role in promoting cell proliferation, matrix remodeling, and angiogenesis . In addition, CGFs are widely used in oral and maxillofacial surgery procedures to promote tissue regeneration and repair owing to their convenient collection methods and risk-free clinical application . CGFs exist in different forms and can be classified into two categories based on their physical state: gel-phase concentrated growth factor (GPCGF) and liquid-phase concentrated growth factor (LPCGF). At present, GPCGF is used more frequently in the field of hard tissue regeneration, while LPCGF has been used in wound healing, periodontal tissue regeneration, and the treatment of temporomandibular joint disorders because of its excellent flowability . Platelet concentrates have potential for use in GBR. Işık et al. concluded that platelet-rich fibrin (PRF, the second-generation platelet concentrate) combined with a bovine-derived xenograft could successfully achieve peri-implant bone augmentation. Cheruvu et al. found that PRF membranes could enhance peri-implant soft tissue healing. Xie et al. found that CGF combined with bone powder particles could promote peri-implant bone regeneration compared with traditional GBR. However, investigations of the osteogenic effect of the combination of CGF and DBBM in implant placement with simultaneous GBR need to be supported by more research, and research focused on the osteogenic effect of different forms of CGF is even more scarce. Most patients are more concerned about short-term postoperative adverse events. 
However, most of the studies have focused on objective indicators such as reconstruction and healing of the defective tissue and have neglected to document and investigate patients’ postoperative adverse reactions. This study also focused on the effect of different bone graft materials on the occurrence of postoperative adverse reactions in patients. The objective of the present study was to explore the bone regeneration effect of various forms of concentrated growth factor (GPCGF/LPCGF) when used in combination with DBBM for implant placement and simultaneous GBR and their impact on postoperative adverse reactions. The null hypotheses were that LPCGF would have the ability to promote the formation of new bone and to relieve postoperative adverse effects better than GPCGF.
Study design
The study was designed as a parallel single-blinded randomized controlled clinical trial, was performed in accordance with the guiding principles of the Declaration of Helsinki as revised in 2013, and was ethically approved by the committee of the Affiliated Stomatological Hospital of Chongqing Medical University (No. 2020-003). The study was registered with the Chinese Clinical Trial Registry (ChiCTR2300070107).
Power calculation
G*Power 3.1.9.7 (University of Düsseldorf, Düsseldorf, Germany) was used to calculate the sample size. For the power analysis, the primary outcome was the buccal lateral bone thickness variation from the immediate postoperative period to 6 months postoperatively. Based on the preliminary experiment results, an effect size of 0.45 was obtained. The significance level was set at 0.05, and the power (1 − β) at 0.8. The sample size was calculated to be 51. Taking into account a 10% loss to follow-up rate, we increased the sample size to 57 (19 per group).
Eligibility criteria
Fifty-seven patients with bone defects who were admitted to the Affiliated Stomatological Hospital of Chongqing Medical University between April 2023 and July 2023 were recruited for the study. The inclusion criteria were as follows: (1) aged > 18 years; (2) teeth removed at least 3 months before surgery; (3) a single missing anterior tooth or premolar, accompanied by a horizontal bone defect suitable for GBR concurrent with implantation, with a residual alveolar bone width > 4 mm and ≤ 5 mm ; (4) no systemic disease that could affect bone healing or render the patient unsuitable for dental surgery; (5) periodontal health. The exclusion criteria were as follows: (1) inability to understand the experimental content or withdrawal in the middle of the experiment; (2) local or systemic contraindications for implant surgery; (3) severe periodontitis or poor oral hygiene; (4) heavy smoking (≥ 10 cigarettes per day). 
An informed consent form was signed by each included patient for participation in the study.
Randomization and allocation concealment
Simple randomized grouping was adopted in the present study. The patients were grouped by computer-generated random numbers, and the grouping information was concealed in sealed envelopes. The envelopes were opened by the operator prior to surgery. Patients were randomly assigned to the control group, GPCGF group, and LPCGF group, with 19 individuals in each group. The control, GPCGF, and LPCGF groups underwent GBR using DBBM, the GPCGF-DBBM mixture, and the LPCGF-DBBM mixture, respectively. Individuals who were unfamiliar with the trial served as outcome evaluators, data monitors, and statistical analysts.
GPCGF/LPCGF preparation
Venous blood (9 mL) was drawn into sterile vacuum centrifuge tubes of two types (Greiner Bio-One GmbH, Kremsmünster, Austria): one with serum clot activator (red centrifuge tube, 454092) for gelatinous GPCGF, and the other with no additive (white centrifuge tube, 4550001) for LPCGF. The samples underwent immediate centrifugation (Medifuge, Silfradent, S. Sofia, Italy) following a variable-speed centrifugation protocol: 30 s at 2700 rpm, 2 min at 2400 rpm, 4 min at 3000 rpm, and finally 36 s of deceleration to complete the separation, with a total centrifugation duration of 12 min . After centrifugation of the red tube, three distinct layers were observed: the erythrocyte layer, the GPCGF layer, and the serum layer. The second (GPCGF) layer was extracted using sterile scissors to obtain the gelatinous GPCGF. Centrifugation of the white tube resulted in three separate layers: the erythrocyte layer, the LPCGF layer, and the platelet-poor plasma layer. The LPCGF was collected in a 5 mL disposable sterile syringe . Following the manufacturer’s instructions, the GPCGF was pressed into a membrane and cut into small particles. 
The GPCGF was mixed with 0.25 g of DBBM (Bio-Oss, Geistlich Pharma AG, Wolhusen, Switzerland) and then placed in a blender (Roundup, Silfradent, S. Sofia, Italy) for 15 s to obtain sticky bone, i.e., the GPCGF-DBBM mixture. The LPCGF was likewise mixed with 0.25 g of DBBM to obtain sticky bone, i.e., the LPCGF-DBBM mixture. (Fig. )
Surgical procedure
The surgeries were performed at the Affiliated Stomatological Hospital of Chongqing Medical University by experienced clinicians qualified for implant bone augmentation surgery. Prior to surgery, patients gargled with 0.12% chlorhexidine for 1 min, and routine disinfection of the face was performed. 4% articaine with 1:100,000 epinephrine was used for local anesthesia during the procedure. A mucoperiosteal triangular flap was elevated at the surgical site and extended to both adjacent teeth. Under digital guidance, the implant sites were prepared in the surgical area, and implant placement was completed according to the manufacturers’ instructions. Implants were inserted while maintaining a minimum thickness of 1 mm for the lingual cervical bone and were placed in a prosthetically desirable position. Intraoperatively, dehiscence of the lateral labial bone due to the presence of a horizontal bone defect was observed, which resulted in exposure of the implant. A cover screw was attached to the implant. To increase the blood supply to the area of bone regeneration, a round bur was used to drill holes in the cortical bone on the buccal side of the implant area. The sticky bone (DBBM, GPCGF-DBBM mixture, or LPCGF-DBBM mixture) was applied to the bone deficiency, raising the buccal lateral bone thickness by at least 2 mm . The bone deficiency was subsequently covered with a collagen membrane (Bio-Gide, Geistlich Pharma AG, Wolhusen, Switzerland) and finally sutured. (Fig. ) 
Outcome measures
CBCT (KaVo 3D eXam CBCT, KaVo, USA; 80 mA, 80 kVp, and 8.9-s scan time) was performed under the same preset conditions immediately postoperatively and at 6 months postoperatively. All CBCT scans were performed by senior radiologists on the same machine. Imaging analysis was performed using 3D imaging software (KaVo 3D eXam Vision, KaVo, USA). The same anatomical and implant structures were used as reference points during measurement to match the images taken immediately after surgery and 6 months after surgery . Three horizontal lines were generated perpendicular to the central axis of the implant. Buccal lateral bone thickness was measured on the CBCT scans at three levels, 2 mm (L1), 4 mm (L2), and 6 mm (L3) apical to the implant shoulder, immediately postoperatively and at 6 months postoperatively . The same trained and experienced researcher performed all measurements; the measurements were repeated three times at all sites and averaged. (Fig. )
The sutures were removed during a one-week follow-up visit after surgery to evaluate healing. Each day during the first postoperative week, the patients completed questionnaires, which were subsequently used for assessing postoperative limitations. The data on postoperative limitations of mouth opening, swelling, chewing, speaking, sleeping, daily routines, and bleeding were provided by the patients through these questionnaires . Postoperative adverse effects were measured on five-point numerical rating scales (NRS), in which 0 indicated ‘not at all’ and 4 indicated ‘very’, with severity increasing from 1 to 4. The patients were also asked each day after surgery whether they had used any painkillers. Finally, visual analog scale (VAS) scores were used to measure oral pain .
Statistical analyses
IBM SPSS Statistics 22.0 (IBM Corp., New York, USA) was used to conduct the statistical analysis of the resulting data. 
The differences among groups were analyzed by one-way ANOVA, and post hoc tests were analyzed using the Tukey test. The NRS scores and VAS scores for postoperative adverse effects were not normally distributed, so they were analyzed using the Kruskal-Wallis test. The means, standard deviations (SD), and 95% confidence intervals (95% CI) of the data are reported. All the data were summarized using descriptive statistics, and p < 0.05 was used as the threshold for statistical significance.
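The a priori sample-size calculation (three groups, effect size f = 0.45, α = 0.05, power 0.8, giving N = 51 before dropout inflation) can be sanity-checked by Monte Carlo simulation. The sketch below uses only the Python standard library; the critical value F(2, 48) ≈ 3.19 is an approximation taken from standard F tables, and the whole block is an illustration rather than a replacement for G*Power:

```python
import random
import statistics

# Approximate upper 5% critical value of F(2, 48), from standard
# tables (an assumption for this sketch).
F_CRIT = 3.19

def anova_f(groups):
    """One-way ANOVA F statistic for a list of groups of observations."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    means = [statistics.mean(g) for g in groups]
    grand = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

def simulated_power(n_per_group=17, f=0.45, n_sims=2000, seed=1):
    """Fraction of simulated three-group trials whose F exceeds F_CRIT."""
    rng = random.Random(seed)
    # Group means (-d, 0, d) with unit SD give Cohen's f = d * sqrt(2/3).
    d = f * (3 / 2) ** 0.5
    hits = 0
    for _ in range(n_sims):
        groups = [[rng.gauss(mu, 1) for _ in range(n_per_group)]
                  for mu in (-d, 0.0, d)]
        if anova_f(groups) > F_CRIT:
            hits += 1
    return hits / n_sims
```

With 17 participants per group the simulated power comes out close to the targeted 0.8, consistent with the reported total of 51 before the 10% dropout inflation to 57.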
The study was designed as a parallel single-blinded randomized controlled clinical trial and was performed in accordance with the guiding principles of the Declaration of Helsinki revised in 2013 and was ethically approved by the committee of Affiliated Stomatological Hospital of Chongqing Medical University (No.2020-003). The study was registered with the Chinese Clinical Trial Registry (ChiCTR2300070107).
G*power3.1.9.7(University of Düsseldorf, Düsseldorf, Germany) was used to calculate the sample size. For the power analysis, the primary outcome was the buccal lateral bone thickness variation from the immediate postoperative period to 6 months postoperatively. Based on the preliminary experiment results, an effect size of 0.45 was obtained. The significance level was set at 0.05, and the power (1-β) at 0.8. The sample size was calculated to be 51. Taking into account a 10% loss to follow-up rate, we increased the sample size to 57 (19 per group).
Fifty-seven patients with bone defects who were admitted to the Affiliated Stomatological Hospital of Chongqing Medical University between April 2023 and July 2023 were recruited for the study. The inclusion criteria were as follows: (1) age > 18 years; (2) tooth extraction at least 3 months before surgery; (3) a single missing anterior tooth or premolar, accompanied by a horizontal bone defect suitable for implantation with concurrent GBR, with a residual alveolar bone width > 4 mm and ≤ 5 mm ; (4) no systemic disease that could affect bone healing or render the patient unsuitable for dental surgery; and (5) periodontal health. The exclusion criteria were as follows: (1) inability to understand the experimental content or withdrawal during the experiment; (2) local or systemic contraindications to implant surgery; (3) severe periodontitis or poor oral hygiene; and (4) heavy smoking (≥ 10 cigarettes per day). An informed consent form was signed by each included patient for participation in the study.
Simple randomization was adopted in the present study. Patients were allocated using computer-generated random numbers, and the grouping information was concealed in sealed envelopes, which were opened by the operator prior to surgery. Patients were randomly assigned to the control, GPCGF, and LPCGF groups, with 19 individuals in each group. The control, GPCGF, and LPCGF groups underwent GBR using DBBM alone, the GPCGF-DBBM mixture, and the LPCGF-DBBM mixture, respectively. Individuals who were not otherwise involved in the trial served as outcome evaluators, data monitors, and statistical analysts.
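The allocation step above can be sketched as follows, assuming a computer-generated random sequence that guarantees the reported equal group sizes (19 per arm) before the entries are concealed in sealed envelopes. The function name and seed are illustrative assumptions, not the software the authors actually used.

```python
import random

def allocate(groups, per_group, seed=None):
    """Build an equal-allocation sequence and shuffle it randomly."""
    rng = random.Random(seed)  # fixed seed makes the list reproducible
    sequence = [g for g in groups for _ in range(per_group)]
    rng.shuffle(sequence)
    return sequence

# 57 sealed-envelope assignments, 19 per arm
envelopes = allocate(["control", "GPCGF", "LPCGF"], 19, seed=42)
print(len(envelopes), envelopes.count("GPCGF"))  # → 57 19
```

Shuffling a pre-balanced list is strictly a form of restricted randomization; it is used here because it reproduces the study's exact 19-19-19 split, which unrestricted simple randomization would not guarantee.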
Nine millilitres of venous blood was drawn into each of two types of sterile vacuum centrifuge tube (Greiner Bio-One GmbH, Kremsmünster, Austria): one with a serum clot activator (red tube, 454092) for gelatinous GPCGF, and one with no additive (white tube, 4550001) for LPCGF. The samples underwent immediate centrifugation (Medifuge, Silfradent, S. Sofia, Italy) following a variable-speed protocol: 30 s at 2700 rpm, 2 min at 2400 rpm, 4 min at 3000 rpm, and finally 36 s of deceleration to complete the separation, for a total centrifugation time of 12 min . After centrifugation of the red tube, three distinct layers were observed: the erythrocyte layer, the GPCGF layer, and the serum layer. The second (GPCGF) layer was extracted using sterile scissors to obtain the gelatinous GPCGF. Centrifugation of the white tube also produced three layers: the erythrocyte layer, the LPCGF layer, and the platelet-poor plasma layer. The LPCGF was collected in a 5 mL disposable sterile syringe . Following the manufacturer's instructions, the GPCGF was pressed into a membrane, cut into small particles, mixed with 0.25 g of DBBM (Bio-Oss, Geistlich Pharma AG, Wolhusen, Switzerland), and placed in a blender (Roundup, Silfradent, S. Sofia, Italy) for 15 s to obtain sticky bone, the GPCGF-DBBM mixture. The LPCGF was mixed with 0.25 g of DBBM to obtain sticky bone, the LPCGF-DBBM mixture. (Fig. )
The surgeries were performed at the Affiliated Stomatological Hospital of Chongqing Medical University by experienced clinicians qualified in implant bone augmentation surgery. Prior to surgery, patients gargled with 0.12% chlorhexidine for 1 min, and routine disinfection of the face was performed. 4% articaine with 1:100,000 epinephrine was used for local anesthesia during the procedure. A triangular mucoperiosteal flap was elevated at the surgical site and extended to both adjacent teeth. Under digital guidance, the implant sites were prepared in the surgical area and implant placement was completed according to the manufacturers' instructions. Implants were inserted while maintaining a minimum lingual cervical bone thickness of 1 mm and were placed in a prosthetically desirable position. Intraoperatively, dehiscence of the lateral labial bone due to the horizontal bone defect was observed, resulting in exposure of the implant. A cover screw was attached to the implant. To increase the blood supply to the area of bone regeneration, a round bur was used to drill holes in the cortical bone on the buccal side of the implant area. The graft material (DBBM, GPCGF-DBBM mixture, or LPCGF-DBBM mixture) was applied to the bone deficiency to raise the buccal lateral bone thickness by at least 2 mm . The bone deficiency was then covered with a collagen membrane (Bio-Gide, Geistlich Pharma AG, Wolhusen, Switzerland) and finally sutured. (Fig. )
Research population
Fifty-seven patients were recruited for the trial, all of whom completed it: 19 were treated with conventional DBBM, 19 with GPCGF + DBBM, and 19 with LPCGF + DBBM, and all patients were evaluated by CBCT after implant placement and 6 months after implantation. Figure shows CBCT images of three patients, one from each treatment group. This study is reported according to the CONSORT guidelines, and Fig. provides a CONSORT flowchart illustrating the study design. The trial included 57 patients (29 males and 28 females; age range 19–76 years; mean age 44.6 ± 14.4 years). No statistically significant differences were observed in sex, age, implant site, or implant brand among the three groups ( p > 0.05, Table ). In all 57 GBR procedures (57 implants in total), each implant was radiologically confirmed to have healed uneventfully.
Buccal lateral bone thickness
In the immediate postoperative period, the buccal lateral bone thickness (mean ± SD) at the three measurement levels was 3.41 ± 0.66 mm (L1), 3.74 ± 0.79 mm (L2), and 3.77 ± 0.87 mm (L3) in the control group; 2.95 ± 0.72 mm (L1), 3.13 ± 0.89 mm (L2), and 3.23 ± 0.93 mm (L3) in the GPCGF group; and 3.24 ± 0.76 mm (L1), 3.58 ± 0.87 mm (L2), and 3.79 ± 1.02 mm (L3) in the LPCGF group, with no significant difference among the three groups ( p > 0.05) (Table ). Compared with the immediate postoperative period, the buccal lateral bone thickness in the GPCGF, LPCGF, and control groups decreased to different degrees at 6 months after surgery.
In the control group, the changes (mean ± SD) were 0.98 ± 0.39 mm (ΔL1), 0.87 ± 0.43 mm (ΔL2), and 0.78 ± 0.51 mm (ΔL3); in the GPCGF group, 0.45 ± 0.28 mm (ΔL1), 0.39 ± 0.32 mm (ΔL2), and 0.27 ± 0.41 mm (ΔL3); and in the LPCGF group, 0.74 ± 0.51 mm (ΔL1), 0.60 ± 0.64 mm (ΔL2), and 0.50 ± 0.50 mm (ΔL3). One-way ANOVA showed that the difference among the three groups was statistically significant at all levels (ΔL1: p < 0.001; ΔL2: p = 0.013; ΔL3: p = 0.004) (Fig. ). Based on the Tukey test, the changes in buccal lateral bone thickness at 6 months postoperatively were smaller in the GPCGF group than in the control group (ΔL1: p < 0.001; ΔL2: p = 0.009; ΔL3: p = 0.003), while no statistically significant difference existed between the LPCGF and control groups ( p > 0.05). Furthermore, no significant difference in the change in buccal lateral bone thickness was observed between the GPCGF and LPCGF groups. (Fig. )
Postoperative adverse effects
The data presented in Fig. revealed that within one week after surgery, statistically significant differences could be observed in bleeding, restricted mouth opening, chewing, sleeping, speaking, daily routine, and pain ( p < 0.05). In terms of postoperative pain, the patients in the GPCGF group exhibited lower pain scores than those in the control group on days 0–4 (day0: p = 0.004; day1: p = 0.004; day2: p = 0.019; day3: p = 0.004; day4: p = 0.024). (Fig. -a) In terms of postoperative bleeding, the GPCGF group had significantly lower NRS scores than the control group on days 2–6 (day2: p = 0.017; day3: p = 0.008; day4: p = 0.007; day5: p = 0.021; day6: p = 0.043), whereas the LPCGF group showed a significant difference from the control group on day 5 ( p = 0.005). The GPCGF group exhibited a lower bleeding NRS score than the LPCGF group on day 5 ( p < 0.001). (Fig.
-b) In terms of speaking, the adverse effects in the GPCGF group were less pronounced than those in the control group on days 0, 1, 3, 4, 5, and 6 (day0: p = 0.006; day1: p = 0.027; day3: p = 0.034; day4: p = 0.008; day5: p = 0.028; day6: p = 0.005), and the GPCGF group outperformed the LPCGF group in speaking ability on day 5 ( p = 0.039). (Fig. -c) Compared with the control group, the GPCGF group showed milder restrictions on mouth opening on days 4 and 5 (day4: p = 0.029; day5: p = 0.044), and the LPCGF group showed milder restrictions on mouth opening on day 5 ( p = 0.044). (Fig. -d) The GPCGF group experienced less chewing discomfort than did the control group from days 4–6 (day4: p = 0.021; day5: p = 0.039; day6: p = 0.037). (Fig. -e) The GPCGF group exhibited reduced sleep impacts on days 0 and 4 (day0: p = 0.030; day4: p = 0.002) (Fig. -f) and the GPCGF group demonstrated a statistically notable variance in daily living scores solely on day 4 ( p = 0.028) when contrasted with the control group (Fig. -g).
This study explored the clinical outcomes of various forms of concentrated growth factor used in combination with DBBM for implant placement with simultaneous guided bone regeneration, and their impact on postoperative adverse reactions. Compared with DBBM applied alone, GPCGF combined with DBBM as a bone grafting material for implantation with simultaneous GBR achieved better bone regeneration results. In addition, the use of CGF was effective in reducing patients' postoperative adverse reactions, although LPCGF was not as effective as GPCGF in mitigating certain adverse effects (bleeding and speaking). Thus, the null hypothesis of this research was rejected. DBBM is a commonly used graft material derived from the inorganic component of bovine bone, whose main constituent is porous hydroxyapatite. Since the porous structure of DBBM is similar to that of human bone, it can provide good support for new bone formation; the porous structure also provides a large surface area that promotes the formation of new blood vessels, thereby promoting osteogenesis . DBBM can not only reduce patient trauma and simplify surgical procedures but also achieve an osteogenic effect close to, or even better than, that of autogenous bone transplantation, and it is widely used in GBR, maxillary sinus lifting, and alveolar ridge preservation . While retaining the original advantages of DBBM, the mixture of CGF and DBBM adds growth factors that promote cell proliferation, differentiation, and angiogenesis, and its osteogenic effect has been confirmed by a large number of studies . Our study showed that at 6 months postoperatively, the change in buccal lateral bone thickness was smaller in the GPCGF group than in the control group. Currently, GPCGF is the most widely used form of CGF, and most CGF studies have focused on it. With regard to bone tissue regeneration, our findings closely agreed with those of Dai et al.
, who demonstrated that GPCGF has a positive role in promoting bone defect repair in implant-contemporaneous GBR. Although there are few studies on the application of GPCGF in contemporary GBR implantation, several studies have confirmed its potential to enhance bone tissue regeneration in dentistry. In alveolar preservation, the use of GPCGF helps preserve the horizontal width and height of the alveolar ridge while stimulating new bone growth . Ghasemirad et al. utilized GPCGF for maxillary sinus floor augmentation surgery and observed a greater osteogenesis rate in the GPCGF group than in the control group at 6 months post-surgery. The potential of CGFs to promote bone regeneration involves fibrin networks containing platelets, leukocytes, and growth factors. Fibroblasts are involved in angiogenesis and tissue remodeling, while endothelial cells provide the matrix for cell migration. Platelets are particularly significant because they produce large amounts of bioactive proteins that promote cell morphogenesis, growth, and recruitment . According to previous studies, CGF contains CD34-positive cells, as well as TGF-β1 and VEGF, and these growth factors exert favorable effects on blood vessel maintenance, neovascularization, and anti-adhesive angiogenesis . In addition, these factors stimulate the differentiation of mesenchymal stem cells into osteogenic cells, promote cell proliferation and migration, prevent bone resorption, and accelerate bone tissue repair through the regulation of gene expression . In preparing LPCGF, we opted for white centrifuge tubes without any additional ingredients instead of the green tubes recommended, which contained sodium heparin. We were concerned that this component could hinder the osteogenic effect. 
Before this study, we hypothesized that LPCGF, with its liquid morphology and lack of additives, could avoid extraneous components and form a more uniform and effective bond with DBBM, thereby enhancing osteogenesis. Although the osteogenic effect in the LPCGF group was not statistically different from that in the control group, our findings showed that the mean postoperative resorption of buccal bone thickness in the LPCGF group was lower than that in the control group. We therefore hypothesize that LPCGF has some osteogenic effect, but one less pronounced than that of GPCGF and too small to produce a statistically significant difference. LPCGF may not have played as active a role as GPCGF in bone regeneration during implant placement with simultaneous GBR for the following possible reasons. The osteogenic mechanism of GPCGF is that the various growth factors it contains can promote the proliferation of fibroblasts, osteoblasts, and endothelial cells; stimulate the production of extracellular matrix (such as collagen and elastin); promote the resolution of inflammation; inhibit excessive inflammation; and create a favorable environment for tissue regeneration . VEGF and other growth factors play an active role in stimulating new angiogenesis . Although LPCGF also contains growth factors that promote tissue regeneration, such as TGF-β and VEGF , the levels of growth factors in LPCGF may be lower than those in GPCGF . Ma et al. pointed out that compared with LPCGF, GPCGF can more effectively enhance the proliferation and migration of bone marrow mesenchymal stem cells and also has better osteogenic induction activity and stability.
In addition, the main difference between GPCGF and LPCGF in the preparation process is that GPCGF uses centrifuge tubes containing coagulation promoters, while the centrifuge tubes of LPCGF do not use any additives, and the coagulation promoter of the centrifuge tubes of GPCGF used in the present experiments is silica. Silicon is an important trace element in the human body. It promotes bone calcification, increases bone density, and promotes bone growth . Silica nanoparticles are also widely used as drug delivery systems to promote bone regeneration . Although silica was mainly used as a procoagulant coated on the walls of the centrifuge tubes in this experiment, and its content in GPCGF was probably low, it may still play an active role in the promotion of osteogenesis in GPCGF. Most patients treated with implants experience postoperative adverse reactions . To investigate the kinds of materials that would provide better relief for patients’ postoperative discomfort, we used the NRS and the VAS to investigate the adverse reactions of patients at 7 days postoperatively. In the present study, at the beginning of the postoperative week, all patients experienced varying degrees of discomfort and limited activity. The use of GPCGF reduces the postoperative adverse effects of pain, bleeding, speaking, mouth opening, chewing, sleep, and daily routine after surgery for patients. There are few reports on the use of GPCGF to alleviate postoperative adverse effects on simultaneous implant GBR, but some studies using GPCGF in other oral treatments have shown similar results to our study . Lu et al. noted that the application of GPCGF can reduce pain and accelerate healing after mandibular impacted wisdom tooth extraction. Koyuncu et al. also reported that patients in the GPCGF group experienced less postoperative discomfort in the first seven days. 
Compared with the control group, the study revealed that GPCGF was more effective in reducing postoperative adverse effects. This difference is primarily attributable to the ability of GPCGF to promote the healing and regeneration of soft tissues at surgical sites . The beneficial effects of GPCGF on mouth-opening restriction, mastication, sleep, and daily living were primarily observed between postoperative days 4 and 6. This may be linked to the soft tissue healing process: days 4–6 fall within the angiogenic phase , during which the proangiogenic and tissue-migration abilities of the growth factors in CGF start to increase, leading to improved healing with fewer adverse effects than in the control group. By day 7 post-procedure, only a few patients still experienced mild adverse effects, and the difference in tissue healing was no longer evident through adverse effects. Compared with the control group, the efficacy of LPCGF in reducing postoperative patient discomfort was demonstrated only for bleeding on day 5 and mouth opening on day 5 after surgery. Current research on LPCGF has focused primarily on soft tissues. Yu et al. demonstrated the ability of LPCGF to promote the proliferation, migration, and differentiation of human dental pulp cells. Zhan et al. applied LPCGF to the surface of roots affected by periodontitis and found that it promoted cell attachment, growth, migration, and differentiation. Given the potential of LPCGF in soft tissue regeneration, it can be hypothesized that LPCGF may promote the healing of gingival tissues and thus relieve adverse postoperative reactions. Compared with GPCGF, LPCGF did not perform as well in reducing certain postoperative adverse effects (bleeding and speaking), which could be attributed to the following reasons. Firstly, as a result of differences in internal scaffold structure, LPCGF releases growth factors more rapidly and has a poorer sustained-release profile over the same interval .
Furthermore, the fibronectin structure in GPCGF is richer and denser, which may provide a better scaffold for tissue reconstruction. Secondly, the growth factor content of GPCGF and its ability to promote cell growth and migration were considered superior to those of LPCGF in some studies . Thirdly, since there is no obvious boundary between the LPCGF layer and the PPP layer, some liquid with a lower growth factor content may be drawn up during extraction of the LPCGF, which may bias the growth factor content of the LPCGF. Moreover, GPCGF was combined with DBBM in a dedicated mixer, so the amount of GPCGF per unit of DBBM could be more uniform than in the manually mixed LPCGF. Although more research is needed to explore the prospects of LPCGF in GBR, we believe that GPCGF is more advantageous in this regard. Our study also has several limitations. Firstly, to honor the preferences of the patients and accommodate their individual alveolar bone conditions, the brand and size of the implants were not standardized, although this variation was kept within a specific range. Another limitation is that the study did not include more objective clinical outcomes, such as implant stability. In addition, the conclusions above are only conjectures based on the present study; although clinical osseointegration was achieved, no histological analysis was conducted to reveal the characteristics of the tissue in contact with the implant. The application of different forms of CGF in other implant surgeries (e.g., internal or external maxillary sinus lift) also needs to be explored in further experiments. Moreover, follow-up after implant loading would allow a more comprehensive evaluation of the different treatment options by assessing bone resorption after loading.
From this randomized clinical trial, we draw the following conclusions: in promoting bone regeneration after GBR, the GPCGF-DBBM mixture was superior to DBBM applied alone; and in relieving certain postoperative symptoms, the CGF-DBBM mixtures were superior to DBBM, with the GPCGF-DBBM mixture superior to the LPCGF-DBBM mixture for bleeding and speaking.
Below is the link to the electronic supplementary material. Supplementary Material 1
Impact of perceived factors of coronavirus infection on COVID-19 vaccine uptake among healthcare workers in Ghana—Evidence from a cross-sectional analysis | acaa0150-e5d6-470c-9386-32dfa2aafcaf | 11819584 | Vaccination[mh] | According to the World Health Organization (WHO), COVID-19 is no longer a pandemic and is no longer considered a disease of public health emergency. However, COVID-19 infection-related deaths are still recorded globally . As of October 18, 2023, 0.77 billion infections have been confirmed, with nearly 7 million deaths worldwide . Europe and the Western Pacific were the hardest hit during the pandemic peak periods, with confirmed cases in the region of 276 million and 207 million respectively. Although Africa recorded relatively low figures as compared to other parts of the world, the numbers nonetheless were staggering, with nearly 13 million positive cases and 258,562 deaths by the time the pandemic was no longer considered a disease of public health emergency . Ghana confirmed its first case of COVID-19 in March 2020 and reported 171,160 laboratory-confirmed cases as of February 20, 2023 . Following the insurgence of the novel coronavirus, the scientific community managed to produce a vaccine that has now proven useful in curbing the spread of COVID-19 or at least mitigating the severity of the virus. Although COVID-19 is no longer considered a disease of public health emergency, vaccine distribution and uptake remain a public health interest . As of October 23, 2023, 13.5 million vaccine doses have been administered worldwide . The vaccine has proven helpful in slowing the infection transmission since the first dose was inoculated in New York . Given the initial limited supply of vaccines in the global community, particularly in lower- and middle-income countries, a select few had the privilege of being vaccinated first. 
Healthcare workers were among the initial beneficiaries of the COVID-19 vaccination project globally, considering their frontline role in combating COVID-19, particularly during the peak periods . Given the peculiarity of their work, healthcare professionals suffered infection as well, and the literature has shown that they were among the most affected groups since the emergence of the pandemic; thus, close attention was given to them to boost the chances of winning the fight against the pandemic. At the peak of the pandemic, nearly 3000 health workers were infected in Ghana, resulting in 11 deaths . The literature suggests that Ghana had vaccinated 38.40% of its citizens as of January 15, 2023, yet this was not achieved without controversies and myths about COVID-19 vaccine uptake . As a consequence, vaccine hesitancy persisted even among healthcare workers, posing a threat to vaccine uptake and the chances of achieving herd immunity . Elsewhere, in South Africa and Zimbabwe, studies have shown that less than 50% of the targeted population were willing to take up the COVID-19 vaccine, citing doubts about the vaccine's usefulness and safety . Other studies reported mixed views of COVID-19 vaccine acceptance among health professionals in Ghana . While the literature on COVID-19 vaccine uptake remains scanty and narrow, vaccine hesitancy among trained healthcare workers is also worrying. In March 2022, an analysis from the Ghana District Health Information Management System (DHIMS) showed that of the 31,238 members of the general population in the Mampong municipality of Ghana who took the first dose of the COVID-19 vaccine, 19,237 people, including healthcare workers, failed to take a second dose, despite the workers' direct involvement in the vaccination programmes and campaigns.
At the emergence of the coronavirus pandemic, speculation and conspiracy theories, particularly in lower- and middle-income countries, varied constantly; coupled with the limited knowledge of COVID-19 in its early stages, this probably impacted vaccine uptake negatively. Therefore, this study aimed to assess perceived factors affecting COVID-19 vaccine uptake among healthcare workers in Ghana. In the current approach, we adapted an existing framework constructed by Azaare et al. (2020) to design the study and determine the association between COVID-19 vaccine uptake and perceived factors related to COVID-19 infection, such as the perceived seriousness of COVID-19, risk of infection, trust in the recommendations of experts, vaccine country of origin, perceived vaccine effectiveness, and perceived vaccine safety, while accounting for confounding factors ( ).
Study design

The study used a cross-sectional design and collected one-time data from healthcare workers in the Mampong municipality of the Ashanti region of Ghana between April 24 and June 23, 2022. The study considered the uptake of the COVID-19 vaccine among healthcare workers across different cadres ( ) as a composite dependent variable. We then determined the factors of association, listing common perceived factors such as perceived seriousness of COVID-19 infection, perceived vaccine safety, perceived country of origin, perceived risk of COVID-19 infection, and trust in expert recommendations by the WHO or Ghana's Ministry of Health (MOH). Data on participants' previous medical history and sociodemographic characteristics, such as age, gender, religion, marital status, educational status, and area of residence, were collected from the participants using an online Google form, and these were adjusted for in a multiple logistic regression model ( ).

Study settings

Mampong municipality is a typical Ghanaian district, predominantly Christian, that relies on the Primary Health Care (PHC) concept and the Community-based Health Planning and Services (CHPS) initiative, described as level 3 and level 4 categories of care, respectively. The district is home to about 107,331 people according to the 2021 population and housing census. Mampong is bordered to the south by Sekyere-Dumasi, to the east by Sekyere South Municipal, and to the north by Sekyere Central District, and has six PHC health centres and five CHPS compounds. The Mampong township also has five private health service providers with a health workforce of about 670.

Sample size determination

Yamane's formula for random sampling, n = N/(1 + Ne²), was used to determine the sample size, given the known population size, where n = sample size, e = margin of error (5%), and N = population size (670 healthcare workers). Therefore, n = 670/[1 + 670(0.05²)] ≈ 250.
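The sample-size arithmetic above can be checked with a short script (assuming the stated workforce of 670 and a 5% margin of error):

```python
# Yamane's formula: n = N / (1 + N * e^2)
def yamane_sample_size(population: int, margin_of_error: float = 0.05) -> int:
    return round(population / (1 + population * margin_of_error ** 2))

n = yamane_sample_size(670)            # 670 / (1 + 670 * 0.0025) -> 250
n_with_nonresponse = round(n * 1.05)   # ~5% non-response allowance -> 262
```

The 5% non-response allowance applied to 250 reproduces the final target of 262 respondents reported in the paper.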
We then assumed a 5% non-response rate and approximated a sample size of 262.

Inclusion and exclusion criteria

Healthcare workers 18 years and above and resident in the Mampong municipality were included in the study if they consented to participate. Healthcare workers who reported ill were excluded during the data collection process.

Data collection

We developed a structured self-administered questionnaire using Google Forms and collected data electronically from the study participants. The health facilities within the municipality were grouped into strata, and the number of participants in each stratum was determined using a proportionate stratified random sampling technique. A simple random sampling method was then used to obtain the participants from each stratum, using an equal number of folded pieces of paper with 'yes' or 'no' presented to eligible participants. Those who chose 'yes' were selected for the study and, if they consented, a link to the questionnaire was shared with them by email, WhatsApp, or Facebook via mobile phone or computer. Respondents who required technological assistance were given it. The questionnaire (attached as Supplementary File 1) focused on respondents' perceptions of the risk of COVID-19 infection, perception of the seriousness of COVID-19, COVID-19 vaccination status, brand of COVID-19 vaccine, expert recommendation, medical history, and participants' sociodemographic characteristics.

Statistical analysis

Responses were retrieved into Microsoft Excel version 19, cleaned, and subsequently transferred to SPSS version 25.0 for analysis. Frequencies and proportions of the respondents' characteristics were examined. We also analysed the association between COVID-19 vaccine uptake and respondents' sociodemographic characteristics, such as age, gender, religion, marital status, education, area of residence, COVID-19 infection status, known comorbidity, and healthcare worker cadre, using the chi-square test statistic.
We then adjusted for confounding using multiple logistic regression and checked for statistical significance at p < 0.05. The manuscript was written and reported using the SQUIRE 2.0 checklist guidelines .

Ethics approval and consent to participate

This study first received approval from the graduate studies academic board of the School of Public Health, University for Development Studies, Tamale. The study further received ethical clearance from the Kwame Nkrumah University of Science and Technology Ethical Review Committee, reference CHRPE/PA/142/22. Additionally, a formal letter requesting admittance was sent to the health directorate of the study site for permission to conduct the study. All respondents consented to participate by answering a consent question in the Google questionnaire before gaining access to the rest of the questionnaire. Respondents who did not consent were denied access to the remaining questions. No witnesses were required, as all respondents were above 18 years of age and considered adults per the 1992 Constitution of the Republic of Ghana. Respondents retained the right to withdraw from the study at any time without prior notice to the researchers. The study design concealed respondents' identities, and their opinions and values were maintained.
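The association testing described under Statistical analysis was run in SPSS; as an illustration only, the Pearson chi-square statistic for a 2×2 table (e.g., vaccinated/unvaccinated by comorbidity status) can be computed by hand. This sketch uses made-up counts, not data from this study:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square test (1 df, no continuity correction) for the
    2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # Survival function of chi-square with 1 df: P(X > stat) = erfc(sqrt(stat / 2))
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value

# Hypothetical counts: rows = comorbidity yes/no, columns = vaccinated yes/no
stat, p = chi2_2x2(10, 20, 30, 40)
```

The subsequent covariate-adjustment step (multiple logistic regression) would typically be done with a statistics package, as the authors did in SPSS.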
Respondents' sociodemographic characteristics and COVID-19 vaccine uptake

In all, 260 of 262 respondents returned their questionnaire, a 99.2% response rate. Of the total respondents (n = 260), 219 (84.2%) took at least one shot of a COVID-19 vaccine ( ). All respondents were in active service, i.e., below 60 years of age, in the public sector in Ghana. Nine out of ten respondents were less than 40 years of age. Of the total respondents, 151 (58.1%) were female and 109 (41.9%) were male. A little over half of the respondents were married (51.2%), and nearly 90% had tertiary education. Respondents were mainly nursing care professionals (169; 61.5%). Of all the respondents, 23 (8.8%) reported having a known chronic illness, while one out of ten said they had tested positive for COVID-19 infection since the pandemic outbreak in Ghana in March 2020.

COVID-19 vaccination experience

Of those who took the COVID-19 vaccine, 61.9% took the AstraZeneca (Oxford) vaccine, followed by Johnson & Johnson (8.2%). Sputnik V was the least-taken vaccine among the study participants (3.5%) ( ). Most respondents reported side effects, including severe headache (54.8%), generalised malaise (45.8%), injection site swelling (23.1%), and fever (22.7%) ( ). Of those who experienced side effects, the majority considered theirs mild (103; 47.0%), followed by very severe (46; 21.0%), moderate (43; 19.6%), and severe (27; 12.3%) ( ).

Perceptions of COVID-19 infection and vaccine safety among healthcare workers

Overall, 85.8% of respondents agreed that they are at risk of COVID-19 infection due to occupational exposure. Almost three-quarters (73%) believed that they can protect themselves from COVID-19 and do not require the vaccine. More than half (64.6%) agreed that their families, patients, and friends would be protected if they took a vaccine.
Three in five (60%) of the total respondents had confidence in the measures Ghana's Ministry of Health put in place to control the pandemic and trusted the COVID-19 vaccine recommendations of the WHO and the Ministry of Health. On vaccine safety, 60% agreed that the vaccine is safe, and 50.4% agreed that the vaccine is effective. Nine out of ten participants (88.1%) did not believe that the COVID-19 vaccine had a hidden agenda. However, more than two-thirds of respondents were concerned about the likely side effects of the COVID-19 vaccine. Nonetheless, most of the study participants (85.4%) would recommend the COVID-19 vaccine to eligible individuals. Most of the study participants disagreed with the notion that vaccines developed in Europe and America are safer than those developed in other countries, and disagreed that vaccines deployed in Africa and Ghana are less effective and less safe (43.5% and 48.8%, respectively) ( ).

Association between perceived factors and COVID-19 vaccine uptake among healthcare workers

Comorbidity and healthcare worker cadre were statistically significantly associated with COVID-19 vaccine uptake among the healthcare workers (p = 0.001 and p = 0.030, respectively) ( supplementary sheet 2 ). Perception of the COVID-19 vaccine [aOR = 0.048, 95% CI (0.715, 2.763); p = 0.006], previous vaccine uptake [aOR = 0.048, 95% CI (0.715, 2.763); p = 0.006], perceived vaccine safety [aOR = 0.126, 95% CI (0.027, 0.373); p = 0.001], perceived seriousness of COVID-19 infection [aOR = 0.077, 95% CI (1.75, 2.934); p = 0.008], and trust in experts' recommendation [aOR = 0.048, 95% CI (1.250, 7.704); p = 0.015] were statistically significantly associated with COVID-19 vaccine uptake among the healthcare workers. However, the associations between COVID-19 vaccine uptake and COVID-19 infection status, perceived vaccine effectiveness, vaccine country of origin, and perceived difference of Africa-allocated vaccines were not statistically significant ( ).
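For reference, an adjusted odds ratio and its 95% Wald confidence interval are derived from a fitted logistic-regression coefficient β and its standard error SE as exp(β) and exp(β ± 1.96·SE), so by construction the point estimate lies inside its own interval. A minimal sketch with a hypothetical coefficient (not a value estimated in this study):

```python
import math

def adjusted_odds_ratio(beta: float, se: float, z: float = 1.96):
    """aOR and 95% Wald confidence interval from a logistic-regression
    coefficient (beta) and its standard error (se)."""
    aor = math.exp(beta)
    ci = (math.exp(beta - z * se), math.exp(beta + z * se))
    return aor, ci

# Hypothetical coefficient and standard error, for illustration only:
aor, (lo, hi) = adjusted_odds_ratio(beta=1.14, se=0.46)
```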
Nurses and midwives were the major respondents in this study, constituting nearly two-thirds of the study participants. Although the sampling may have been skewed unfavourably across cadres in the study area, it also provides some leverage to interpret the results for essential healthcare cadres, given the significant role of nursing staff during the pandemic peak period. Respondents were nearly all Christians (95.8%), which speaks to the nagging perceptions of myths and beliefs about COVID-19 and vaccine uptake. Less than 10 percent had tested positive for COVID-19, more than one year after COVID-19 was first reported in Ghana and the first Pfizer-BioNTech COVID-19 vaccine was released. The results show that vaccine uptake was high among the healthcare workers in the study area (84.2%), well above the expected target of 70%. The Oxford AstraZeneca brand was the leading brand taken by healthcare workers, consistent with earlier evidence in Ghana, Greece, the United States, and Hong Kong . This was not surprising, as AstraZeneca was the first COVID-19 vaccine to be distributed through COVID-19 Vaccines Global Access (COVAX) . AstraZeneca was deliberately distributed given its availability and recommendation by world bodies, which probably explains its popularity among the vaccinated brands. Given the option, healthcare workers would probably opt for brands other than AstraZeneca. Our findings also show an improvement in vaccine uptake among the study population relative to earlier studies, which found 38.3% , offering a glimpse of hope for vaccine acceptability, albeit on a gradual scale. It also means the general population may soon follow the example of the health workers and take up the vaccine, boosting the country's chances of herd immunity. Population adherence to COVID-19 vaccine uptake is significant in determining the overall prospect of achieving herd immunity.
Further, the vaccination uptake rate among healthcare workers is relevant to improving the odds of population vaccine acceptability, as healthcare workers show willingness to take up the vaccine themselves. The current findings also show a substantial improvement in vaccine uptake over time, in contrast with a web-based cross-sectional study conducted in Nigeria, which showed that only 20% of participants were willing to participate in the COVID-19 vaccination exercise . The discrepancy in findings may be due to differences in the study period, particularly as the prospects of the COVID-19 vaccine were uncertain at the early stages of vaccine development. It is also safe to argue that willingness to take up a vaccine may differ from actual vaccine uptake. Moreover, most of the factors identified in previous studies might have been addressed or modified, perhaps influencing the current results, hence an improvement in vaccine uptake over time. Nevertheless, it is important to note that even though this study did not report the exact number of vaccine-hesitant participants, one in ten study participants disagreed that the COVID-19 vaccine was safe, 21 percent disagreed that the COVID-19 vaccine is effective, and 40 percent had doubts about COVID-19 vaccine safety despite the WHO recommendation of its use. Eight out of ten study participants who took the vaccine reported side effects, of whom more than half claimed they had moderate to severe side effects. Our findings point to severe headache as the one side effect reported by more than half of the respondents who took the COVID-19 vaccine.
Although some considered their experiences mild, the presence of side effects appears to reduce the odds of achieving an optimal level of inoculation against the novel coronavirus. This is consistent with earlier studies suggesting that Ghanaian healthcare professionals showed mild vaccine side effects that resolved with or without interventions . Structured education should follow vaccination in the future, given the likelihood of side effects. The subject of vaccine mistrust versus expert recommendation and its implications cannot be overemphasized. Healthcare workers did not fully trust COVID-19 vaccine safety despite experts' recommendations, which probably explains the declining figures at the second dose. In our analysis, a little over half of the study participants took a second dose. For full protection, most COVID-19 vaccines require at least two doses to be effective; this suggests that, unless efforts are made to reach the population with single-dose vaccines, herd immunity may be difficult to attain despite high first-dose vaccination figures. Our study also observed mistrust and uncertainty surrounding vaccine safety and effectiveness. This finding resonates with previous vaccine safety and effectiveness studies that identified these concerns as a reason for vaccine hesitancy [ , , ]. We found that perception of COVID-19 vaccine safety, perceived disease seriousness, lack of trust in expert views, and previous vaccination experience influenced the odds of taking the COVID-19 vaccine, and these were statistically significant (p = 0.001, p = 0.008, p = 0.015, and p = 0.001, respectively). The findings are similar to earlier studies in Iraq and the U.S. that found perceived risk of infection to be a significant predictor of vaccination acceptance. Nevertheless, the findings contrast with earlier views promoting expert recommendations as a catalyst for vaccine uptake .
Known comorbidity and professional category negatively influenced COVID-19 vaccine uptake, and these associations were statistically significant.
The study design could not include data from a wider population of Ghanaian healthcare workers, which may have skewed the results to reflect only the views of health workers in the middle belt of Ghana. Also, nine out of ten study participants were Christians, which may have skewed the perceptions and beliefs expressed in the findings. However, the use of probability sampling somewhat offsets the weaknesses of the study design. Using one-time data without nationally validated survey data means the findings could not account for subsequent knowledge influenced by ongoing public education, including among healthcare workers. Nevertheless, the findings of this study add to the literature on COVID-19 vaccine perception and uptake from a deprived setting.
Perceived non-severity of COVID-19 infection, vaccine safety concerns, severity of vaccine side effects, and lack of trust in expert recommendations affect the uptake of the COVID-19 vaccine among healthcare workers. It is recommended that the Ministry of Health and Ghana Health Service provide tailor-made education on COVID-19 vaccination to address lingering myths and misconceptions, even among healthcare workers, to improve COVID-19 vaccine uptake as a sure means of achieving herd immunity against a resurgence of the coronavirus disease.
S1 Table Chi-square test of associations between sociodemographic characteristics and health worker vaccine uptake (DOCX)
Interactions between rootstocks and compost influence the active rhizosphere bacterial communities in citrus | 51a3a3ae-b8e3-45d1-b01b-ff66ed186a4d | 10116748 | Microbiology[mh] | The rhizosphere is the region around the root characterized by high concentrations of plant-derived organic exudates that serve as signal molecules and nutrient sources for microbial recruitment . The microbial communities of the rhizosphere, which constitute the “rhizobiome,” are essential for plant health as they can increase plant nutrient uptake and resistance to several biotic and abiotic stresses through mechanisms including induced systemic resistance, suppression of plant pathogens, and solubilization of soil minerals [ – ]. Most fruit tree crops are composed of two parts: the aboveground fruit-bearing part, the scion, and the belowground part, the rootstock, which provides anchorage and is responsible for water and nutrient uptake. The scion and rootstock, which are often genetically different, are joined through the process of grafting . New rootstocks are developed to adapt to soilborne stresses and diseases and to modulate the horticultural characteristics of the scion. The history of rootstock use and breeding in modern citrus production has been shaped by diseases such as Phytophthora root rot, Citrus tristeza virus , and more recently huanglongbing (HLB, a.k.a. citrus greening) . The rootstock genotype cannot only modulate horticultural traits such as tree size and productivity but can also influence the composition of the rhizosphere microbial communities . The genotype influence on the rhizobiome can even extend to within-species differences as demonstrated in grapes [ – ], apples [ – ], tomatoes , and Populus sp. . Root health is a critical factor for tree growth as it directly influences a tree’s ability to cope with adverse biotic and abiotic stressors. 
Despite the importance of the rhizobiome for plant nutrient availability , few studies have examined the direct link between the rootstock genotype-based recruitment of rhizosphere bacterial communities and the availability of root nutrients for plant uptake. The potential impacts of plant genotype on the rhizobiome composition and nutrient availability are particularly relevant because they suggest the potential for agricultural production systems to maximize benefits from rhizobiomes indirectly through the choice of rootstocks. Just as rootstocks are bred to resist specific soilborne diseases, plant genotypes with desired phenotypes can be used as a microbiome engineering tool to select candidate taxa (e.g., to serve as biofertilizers or biocontrol agents) for agricultural microbiome engineering [ – ]. In addition, the study of the host genes associated with the selection of microbial communities can be used to support microbiome-focused crop breeding . Citrus is a globally important perennial fruit crop, but its production faces challenges, particularly from the devastating disease HLB [ – ]. Several strategies, including the use of selected rootstock genotypes , ground application of specific nutrients , and soil amendments (e.g., compost and plant biostimulants such as humic substances, seaweed extracts, and microbial inoculants) , have been proposed to improve root health and crop production in citrus. In addition, there is increased interest to understand the composition and function of the citrus microbiome to help optimize and maximize future agricultural microbiome engineering solutions [ – ]. In citrus, rootstock selection is essential for the success or failure of a citrus operation , and the benefits of using specially selected rootstocks has been documented in numerous publications [ – ]. Recent studies have also shown that the root metabolic composition may differ among citrus rootstocks [ – ]. 
This raises the question of whether different citrus rootstocks may recruit distinct rhizosphere bacterial communities that could impact root nutrient cycling. Florida is one of the largest citrus producers in the USA with more than 60 million trees on 143,000 harvested ha . Most citrus in Florida is grown on naturally infertile soils that have little organic matter and are unable to retain more than a minimal amount of soluble nutrients , directly affecting the establishment of trees during the early phase when rapid development of the tree canopy is critical. This situation is exacerbated when trees become infected with HLB and fibrous roots start to decline . Increasing soil carbon availability through the application of compost can provide a wide range of benefits for root health and production, including improving nutrient and water retention and nutrient availability . Application of compost can also impact the soil microbiome and increase microbial diversity , which has been linked to reduced disease incidence . A recent study showed that compost application increased the bacterial diversity in the apple rhizosphere of two rootstocks, and that interactions between compost and rootstocks controlled variations in the rhizobiome composition that may determine increases in tree biomass . However, the interaction between compost and rootstocks in the citrus rhizobiome has not been explored, nor has the relationship between rhizobiome taxa and root nutrient concentrations. A recent work showed that predicted bacterial functions in the rhizobiome of grapes were similar among different rootstocks . This suggests that the potential functions of bacterial rhizobiomes recruited by different rootstocks of the same crop may be redundant and evenly spread. Whether this is the case for other crops such as citrus remains to be determined, as well as how the application of compost may impact microbial functions in the citrus rhizobiome and root nutrient availability. 
To date, the study of rootstock effects on the rhizobiome of crops has been predominantly performed using a DNA-based amplicon sequencing approach. However, RNA-based estimates can be more accurate for soil microbiome studies [ – ], since relic DNA is abundant in soil and obscures estimates of soil microbial diversity. In addition, highly active microbial taxa may be rare or even absent in DNA-based approaches for the study of soil microbial communities [ – ]. Therefore, we used 16S rRNA extracted from the citrus rhizosphere to: (1) examine the effect of different citrus rootstocks and/or compost on the abundance, diversity, composition, and predicted functionality of active rhizosphere bacterial communities, and (2) determine the relationships between active rhizosphere bacterial communities and root nutrient concentrations and identify potential bacterial taxa correlated with changes in root nutrients. We hypothesized that the rootstock genotype determines variations in diversity and composition of the rhizobiome, and that the rhizobiome bacterial community is richer and more diverse in soils treated with compost compared to the control, resulting in greater root nutrient concentrations.
Study site, experimental design, and management

The field study was carried out in a commercial citrus orchard in Southwest Florida (Hendry County, FL, USA) under HLB-endemic conditions . The soil at the study site is a sandy spodosol according to the soil taxonomy of USDA , consisting of a surface layer, which is low in organic matter (< 1.5%) and soil N content [< 10 mg/kg of ammonium (NH 4 + ) + nitrate (NO 3 − )], and a subsurface layer with poor drainage . Trees were planted in August 2019 in double rows on raised beds separated by furrows at a spacing of 3.7 m within rows and 7.6 m between rows (358 trees/ha). General management of the orchard followed practices determined by the orchard operator and included seepage irrigation, insecticide, herbicide and fertilizer applications, and other standard management practices. Trees consisted of ‘Valencia’ sweet orange scion ( Citrus sinensis ) on four different rootstocks: (i) X-639 ( C. reticulata ‘Cleopatra’ × Poncirus trifoliata ‘Rubidoux’); (ii) US-802 ( C. maxima ‘Siamese’ × P. trifoliata ‘Gotha Road’); (iii) US-812 ( C. reticulata ‘Sunki’ × P. trifoliata ‘Benecke’); and (iv) US-897 ( C. reticulata ‘Cleopatra’ × P. trifoliata ‘Flying Dragon’). Two treatments were assayed: compost and no compost (control). The field experiment was a randomized split-plot design with treatment (compost or control) as the main plot and rootstock (X-639, US-802, US-812, or US-897) as the subplot (Supplementary Fig. S ). Plots were arranged in eight blocks (16 beds) across a 9-ha experimental site with each block containing two beds either treated with compost or untreated (control). Each bed contained 200 experimental trees, 100 per row, arranged in sets of 50 trees on each of the four rootstocks (Supplementary Fig. S ). Subplots consisted of one bed containing compost and one bed without compost. There were 64 experimental units in total (8 blocks × 2 treatments × 4 rootstocks).
Two months after planting (November), compost was applied at a rate of 12.4 tons/ha and incorporated in beds by a shallow till; the other half of the beds did not receive any compost. Following this initial application, compost was applied every 6 months at the same rate (12.4 tons/ha) by broadcast spreading. The locally sourced compost (Kastco Agriculture Service, Naples, FL, USA) was made from yard waste. The physicochemical characteristics of the compost were as follows: C:N ratio, 24.9; organic matter, 23.6%; pH in water, 7.7; total solids, 51.14%; conductivity, 3.1 mS/cm; phosphorus (P), 0.08%; potassium (K), 0.26%; sulfur (S), 0.09%; calcium (Ca), 3.28%; magnesium (Mg), 0.31%; iron (Fe), 2500 ppm; manganese (Mn), 67.5 ppm; and boron (B), 100 ppm.

Rhizosphere sample collection

Fibrous roots (≤ 1 mm in diameter) with soil attached were collected in August 2021, two years after planting and after four consecutive compost applications, from eight trees from each experimental unit under the canopy, and pooled. Roots were separated in the field and used for the following: (1) root nutrient analysis (about 50 g of roots) and (2) isolation of rhizosphere soil and subsequent RNA extraction (about 10 g of roots). Fibrous roots for microbial analyses were placed in 50-mL sterile centrifuge tubes, immediately flash frozen in liquid nitrogen, and stored at −80 °C until analysis. Rhizosphere soil for RNA extraction was isolated using sterile phosphate-buffered saline (PBS) solution as described previously .

Root nutrient analysis

Root samples for quantification of macro- (N, P, K, Mg, Ca, and S) and micronutrients (B, Zn, Mn, Fe, and Cu) were sent to a commercial laboratory (Waters Agricultural Laboratories Inc., Camilla, GA, USA) and analyzed using inductively coupled plasma (ICP) emission spectroscopy .
RNA extraction and reverse transcription of RNA to cDNA

RNA from 1 g of rhizosphere soil was extracted using the RNA PowerSoil® Total RNA Isolation kit (Qiagen, USA) according to the manufacturer’s instructions. The RNA obtained was quantified using the Qubit™ RNA High Sensitivity assay kit (Thermo Scientific, USA), treated with DNase I (RNase free) (Qiagen, USA) to remove co-extracted DNA following the manufacturer’s directions, and kept at −80 °C until analysis. The High-Capacity cDNA Reverse Transcription Kit was used for reverse transcription reactions with RNase inhibitor (Thermo Scientific, USA), following the manufacturer’s instructions, and using 150–200 ng RNA in a final volume of 20 μL. Synthesis of cDNA was achieved with the use of random primers. The concentration of cDNA was measured using the Qubit™ DNA High Sensitivity assay kit (Thermo Scientific, USA) and kept at −80 °C until analysis.

qPCR assays

The total abundance of active bacterial communities was determined by quantitative PCR (qPCR) using the 16S rRNA gene as a molecular marker and cDNA as a template. Quantitative amplifications were performed following the procedures, primers, and thermal conditions previously described by Castellano-Hinojosa et al. and using a QuantStudio 3 Real-Time PCR system (ThermoFisher, USA). Calibration curves had a correlation coefficient r 2 > 0.99 in all assays. The efficiency of PCR amplification was between 90 and 100%.

Library preparation and sequencing analysis

The extracted cDNA was sent for sequencing at the DNA Services Facility at the University of Illinois, Chicago, IL, USA. The V4 region of the bacterial 16S rRNA gene was amplified using the 515Fa and 926R primers following the Earth Microbiome Project protocol . Raw reads were analyzed using QIIME2 v2018.4 following the procedures described in full detail in Castellano-Hinojosa and Strauss .
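The calibration-curve statistics reported for the qPCR assays (r 2 > 0.99; 90–100% efficiency) derive from a linear fit of Ct against log10 of template copies, with percent efficiency conventionally computed as (10^(−1/slope) − 1) × 100. The study's analyses were run in R; the sketch below is purely illustrative, in Python, with function names of our own choosing:

```python
import math

def standard_curve(log10_copies, ct_values):
    """Least-squares fit of Ct against log10(template copies).

    Returns (slope, intercept, r2) of the calibration curve.
    """
    n = len(log10_copies)
    mx = sum(log10_copies) / n
    my = sum(ct_values) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_copies, ct_values))
    sxx = sum((x - mx) ** 2 for x in log10_copies)
    syy = sum((y - my) ** 2 for y in ct_values)
    slope = sxy / sxx
    intercept = my - slope * mx
    r2 = (sxy ** 2) / (sxx * syy)  # squared Pearson correlation
    return slope, intercept, r2

def amplification_efficiency(slope):
    """Percent qPCR efficiency from the standard-curve slope."""
    return (10 ** (-1.0 / slope) - 1.0) * 100.0
```

A slope of about −3.32 corresponds to perfect doubling each cycle (100% efficiency); slopes between roughly −3.6 and −3.1 bracket the commonly accepted 90–110% range.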
Briefly, bacterial rRNA gene sequence reads were assembled and dereplicated using DADA2 with the paired-end setting into representative amplicon sequence variants (ASVs). ASVs were assigned to the SILVA 132 database using the naïve Bayes classifier in QIIME2 . After quality filtering, denoising, and chimera removal, 4,743,365 16S rRNA sequences (mean of 74,115 per sample) were obtained from the total of 64 samples. Rarefaction curves reached saturation for all samples, indicating sequencing depth was sufficient (data not shown). Raw sequence data were deposited in NCBI’s Sequence Read Archive under BioProject PRJNA837574.

Analysis of the diversity and composition of active rhizosphere bacterial communities

Alpha- (Shannon and Inverse Simpson) and beta-diversity analyses were performed on log-normalized data to avoid rarefaction errors using the R package “phyloseq” v1.24.0 . Beta-diversity analysis included a nonmetric multidimensional scaling (NMDS) on Bray-Curtis distance. Differences in community composition between rootstocks, treatments, and their interaction were tested by permutational analysis of variance (PERMANOVA). The nonparametric analysis ANOSIM based on the relative abundance of the bacterial ASVs was used to examine similarities between rootstocks for each treatment. R values close to 1 indicate dissimilarity between treatments. Differentially abundant bacterial taxa between treatments at the phylum and genus taxonomic levels were detected using the DESeq2 package . p -values ≤ 0.05 were considered significant.

Functional characteristics of active rhizosphere bacterial communities

PICRUSt2 was used to predict the functional capabilities at the category and pathway levels of active rhizosphere bacterial communities based on 16S rRNA gene amplicon data as described by Douglas et al. .
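The alpha-diversity indices and the Bray-Curtis distance underlying the NMDS have simple closed forms. The study computed these with phyloseq in R; purely as an illustrative sketch (our own function names), in Python:

```python
import math

def shannon(counts):
    """Shannon index H = -sum(p_i * ln p_i) over nonzero taxa."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in ps)

def inverse_simpson(counts):
    """Inverse Simpson index 1 / sum(p_i^2)."""
    total = sum(counts)
    return 1.0 / sum((c / total) ** 2 for c in counts if c > 0)

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors
    of equal length: sum|a_i - b_i| / sum(a_i + b_i)."""
    num = sum(abs(x - y) for x, y in zip(a, b))
    den = sum(x + y for x, y in zip(a, b))
    return num / den
```

For a perfectly even community of S taxa, Shannon equals ln(S) and inverse Simpson equals S, which is a convenient sanity check on any implementation.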
Significant differences in functional characteristics between groups of samples were studied using Welch’s t -test, followed by the Benjamini–Hochberg FDR multiple-test correction .

Quantification of a root multinutrient cycling index

Belowground soil biodiversity has a key role in determining ecosystem functioning . Because bacterial communities perform multiple simultaneous functions (multifunctionality), rather than a single measurable process, we constructed a root multinutrient cycling index (MNC) analogous to the widely used multifunctionality index [ – ] using the root nutrients N, P, K, Mg, Ca, S, B, Zn, Mn, Fe, and Cu. These nutrients deliver some of the fundamental supporting and regulating ecosystem services [ – ] and are essential for crop growth, particularly for citrus trees in HLB-endemic conditions . For example, two of the most limiting nutrients for primary production in terrestrial ecosystems are N and P . Potassium, the third essential macronutrient for plants, is involved in numerous biological processes that contribute to crop growth, including protein synthesis, enzyme activation, and photosynthesis . Calcium (Ca) plays a role in cell division and elongation . Magnesium is essential for chlorophyll and is an important cofactor of several enzymes . Sulfur acts as a signaling molecule in stress management as well as in normal metabolic processes . Micronutrients such as B, Zn, Mn, Fe, and Cu are essential to achieve high plant productivity . Each of the eleven root nutrients was normalized (log-transformed) and standardized using the Z-score transformation. To derive a quantitative MNC value for each treatment and rootstock, we averaged the standardized scores of all individual nutrient variables . The MNC index provides a straightforward and interpretable measure of the ability of bacterial communities to sustain multiple functions simultaneously [ – ].
It measures all functions on a common scale of standard deviation units, has good statistical properties, and correlates well with previously established indices that quantify multifunctionality . Pearson’s correlation analysis was used to estimate the relationship between bacterial abundance, alpha- and beta-diversity, and MNC using the cor.test function in R.

Identification of the active taxonomic and predicted functional core rhizobiome

We studied whether the application of compost impacts the active taxonomic and predicted functional core rhizobiome of citrus. ASVs (at the genus level) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways present in at least 75% of the samples were identified as the taxonomic and predicted functional core rhizobiome, respectively, in the control and treated soils .

Statistical analyses

All statistical analyses were conducted in the R environment (v3.5.1; http://www.r-project.org/ ). Means of bacterial abundance, alpha-diversity, and root nutrients were compared via linear mixed-effects (LME) models, with rootstock (X-639, US-802, US-812, or US-897) and treatment (compost or control) considered random factors and dependent variables, respectively, by using the function “lme” in the “nlme” package. Significant effects were determined by analysis of variance (ANOVA) ( p ≤ 0.05). A Tukey’s post hoc test was calculated by using the function “lsmeans.” We used a multiple regression model with variance decomposition analysis to evaluate the relative importance of the differentially abundant taxa between treatments for explaining variations in root nutrients using the R package “relaimpo” . Structural equation modelling (SEM) was used to evaluate the relationships between rootstock, compost, MNC, bacterial abundance, alpha- and beta-diversity, and predicted functionality. The a priori model is shown in Supplementary Fig. S . Path coefficients of the model and their associated p -values were calculated .
We used bootstrapping to test the probability that a path coefficient differs from zero, since some of the variables introduced were not normally distributed . When these data manipulations were completed, we parameterized our model using our data set and tested its overall goodness of fit. We used the χ 2 test (the model has a good fit when χ 2 is ≤ 2 and P is ≥ 0.05) and the root mean square error of approximation (RMSEA; the model has a good fit when RMSEA is ∼≤ 0.05 and P is ∼≥ 0.05) . All SEM analyses were conducted using AMOS 20.0 (AMOS IBM, USA). Significant differences in the relative abundance of ASVs and pathways between taxonomic and predicted functional core rhizobiomes in the control vs. treated soils were calculated using Welch’s t -test and the Benjamini–Hochberg False Discovery Rate (FDR) multiple-test correction using the R package “sgof.”
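The MNC construction described above (log-transform each nutrient, Z-score standardize across samples, then average a sample's standardized scores over the eleven nutrients) reduces to a few lines. The authors worked in R; the following Python sketch is illustrative only, with a hypothetical function name:

```python
import math
import statistics

def mnc_index(samples):
    """Multinutrient cycling index per sample.

    samples: list of dicts mapping nutrient name -> concentration.
    Each nutrient is log-transformed, Z-score standardized across
    samples, and a sample's MNC is the mean of its standardized scores.
    """
    nutrients = list(samples[0].keys())
    logged = {n: [math.log(s[n]) for s in samples] for n in nutrients}
    zscores = {}
    for n, vals in logged.items():
        mu = statistics.mean(vals)
        sd = statistics.pstdev(vals)  # population SD; sample SD is also common
        zscores[n] = [(v - mu) / sd for v in vals]
    return [statistics.mean(zscores[n][i] for n in nutrients)
            for i in range(len(samples))]
```

Because Z-scores are centered, MNC values are relative: a positive score means above-average nutrient status across the measured nutrients, and scores average to zero over the data set.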
Root nutrient analysis

Treatment with compost had a significant effect on root K, Mg, and Mn concentrations (Supplementary Fig. S ). The K concentration was significantly lower in roots from US-897 in treated soils compared to the control. Significantly greater Mg concentrations were detected in roots from US-802 and US-812 in treated soils compared to the controls. For US-802, US-812, and US-897, the Mn concentrations were also significantly greater in roots from the treated soils compared to the controls. Rootstock had a significant effect on Ca, S, and Mn concentrations (Supplementary Fig. S ). In control soils, roots from US-802 had significantly higher Ca concentrations compared to US-812. In soils treated with compost, significantly higher Ca concentrations were detected in roots from US-802 compared to X-639. The S concentrations were also significantly higher in roots from US-802 and US-812 compared to X-639. Significantly higher Mn concentrations were detected in roots from US-812 and US-897 compared to US-802. There were no significant differences in N, P, S, B, Zn, Fe, and Cu concentrations among rootstocks and treatments (Supplementary Fig. S ).

Abundance and alpha- and beta-diversity of active rhizosphere bacterial communities

Rootstock, treatment, and their interaction had a significant effect on the abundance of rhizosphere bacteria (Supplementary Fig. S ). A significantly greater number of bacteria were detected in the rhizobiome of US-812 and US-897 compared to US-802 and X-639 in treated soils, whereas there were no differences in bacterial abundance between rootstocks in the control soils (Supplementary Fig. S ). Alpha-diversity was significantly affected by rootstock, treatment, and their interaction (Fig. ). Compost application significantly increased the number of observed ASVs and the values of the Shannon and Simpson indices for US-812 and US-897 compared to the control soils (Fig. A).
In the control soils, alpha-diversity was significantly greater in the rhizobiome of X-639 compared to US-812 and US-897 (Fig. A). In the treated soils, US-802 had significantly lower alpha-diversity compared to US-812 and US-897 (Fig. A). NMDS analysis on Bray-Curtis distance together with a PERMANOVA analysis showed significant differences in the composition of bacterial communities between treatments and the rootstock and treatment interaction ( p < 0.001) and no significant differences between rootstocks ( p = 0.055) (Fig. B). A subsequent ANOSIM analysis showed there were no significant differences in beta-diversity between rootstocks for the control soils, but that the composition of the bacterial community significantly differed between rootstocks in the treated soil except for US-812 vs. X-639 and US-897 vs. X-639 (Supplementary Table S ).

Bacterial community composition and differentially abundant taxa between rootstocks and treatments

On average, Proteobacteria (48.25%), Acidobacteria (12.9%), Chloroflexi (8.5%), Cyanobacteria (6.4%), Bacteroidetes (6.1%), Actinobacteria (5.9%), and Planctomycetes (5.8%) were the most abundant bacterial phyla across all rootstocks and treatments (Supplementary Fig. S ). Active bacterial ASVs significantly enriched and depleted between treatments for each of the rootstocks were identified at the phylum (Supplementary Fig. S ) and genus (Fig. ) taxonomic levels. Regardless of the rootstock, compost application significantly increased the relative abundance of ASVs belonging to the phyla Firmicutes, Latescibacteria, Tectomicrobia, and candidate phyla GAL15 and FCPU426 compared to control soils (Supplementary Fig. S ).
However, more abundant phyla such as Proteobacteria, Nitrospirae, Cyanobacteria, Chloroflexi, Bacteroidetes, Actinobacteria, and Acidobacteria had both enriched and depleted taxa within the same phyla in soils treated with compost compared to the controls, suggesting treatment effects on bacterial taxa assigned to these phyla were not phylum-specific (Supplementary Fig. S ). Significantly enriched (e.g., Acidothermus , Anaeromyxobacter , Aridibacter , Azohydromonas , Crinalium , Lysobacter , Pseudomonas , Nitrospira , Sphingobium , Sphingomonas , Planctomyces , Pedomicrobium , and Woodsholea ) and depleted genera (e.g., Caldithrix , Cupriavidus , and Nevskia ) in the treated soils were identified across all rootstocks compared to the control soils (Fig. ). Other genera had both significantly enriched and depleted ASVs, such as Acidibacter , Bauldia , Bryobacter , Burkholderia , Devosia , Hyphomicrobium , Mesorhizobium , Microvirga , Varibacter , and Rhizomicrobium . Overall, US-812 and US-897 showed a greater proportion of enriched rather than depleted ASVs (78% vs. 22% and 75% vs. 25%, respectively) compared to US-802 (60% vs. 40%) and X-639 (62% vs. 38%).

Potential contributions of differentially abundant active taxa to root nutrient concentrations

All differentially abundant active bacterial genera contributed to the variations in root nutrient concentrations (Fig. ). For example, genera belonging to Acidobacteria such as Aridibacter , Bryobacter , Candidatus Koribacter, and Candidatus Solibacter were important predictors of, and positively correlated with, root Mg and Fe concentrations, whereas others such as Streptomyces were important for predicting changes in root N, Mg, Ca, S, Zn, and Fe concentrations. Genera assigned to Bacteroidetes, such as Chitinophaga , Flavisolibacter , Niastella , and Terrimonas , were important and positively correlated with root P concentrations.
Caldithrix (Calditrichaeota phylum) and Thermosporothrix (Chloroflexi) were positively correlated with root K and P, respectively (Fig. ). Genera belonging to Cyanobacteria, such as Leptolyngbya , Nostoc , Oscillatoria , and Microcoleus , were important for predicting changes in root N, P, and K concentrations and were positively correlated with these root nutrients. Bacillus and Fictibacillus (Firmicutes phylum) were positively correlated with root P, S, and Mn, whereas Nitrospira (Nitrospirae phylum) was important for predicting changes in root N and Fe (Fig. ). Genera belonging to Planctomycetes, such as Gemmata and Planctomyces , were important and positively correlated with Fe and Cu, whereas those assigned to the Verrucomicrobia phylum (e.g., Candidatus Xiphinematobacter and Chthoniobacter ) were positively correlated with Zn. Within Proteobacteria, there were 41 genera that were important and positively or negatively correlated with all root nutrients (Fig. ). These included Burkholderia , Dongia , and Methylobacterium , which were positively correlated with root Ca, and Hyphomicrobium and Pedomicrobium , which were negatively correlated with this root nutrient (Fig. ).

Relationships between microbial diversity and the MNC

The MNC index increased in soils treated with compost compared to the controls for US-812 and US-897 (Fig. ). There were significant positive relationships between bacterial alpha- and beta-diversity and MNC for the US-812 and US-897 rootstocks (Fig. ). Concerning each component of the multinutrient cycling index, alpha- and beta-diversity significantly and positively correlated with root Mg and Mn concentrations for all rootstocks and with root Zn for US-812 and US-897 (Supplementary Fig. S A, B). Root K was significantly and negatively correlated with alpha- and beta-diversity for US-812 and US-897 (Supplementary Fig. S A, B). Root N, P, and Ca concentrations were significantly and positively correlated with alpha- and beta-diversity for US-812.
Root Cu was positively correlated with beta-diversity for all rootstocks (Supplementary Fig. S B).

Predicted functional traits of active rhizosphere bacterial communities

NMDS analysis on Bray-Curtis distance together with a PERMANOVA analysis showed significant differences in the predicted functionality of bacterial communities among rootstocks ( p = 0.002) and no significant differences between treatments ( p > 0.01) or the rootstock and treatment interaction ( p > 0.01) (Fig. ). There were no significant differences in the mean proportion of predicted KEGG categories between rootstocks and treatments, and the categories of energy metabolism and biosynthesis of other secondary metabolites accounted for more than 60% of the predicted functions (Supplementary Fig. S A). There were only five predicted pathways with significant differences between rootstocks and treatments (Supplementary Fig. S B). Both in the control and treated soils, the pathways of biosynthesis of secondary metabolites and various plant secondary metabolites were significantly more abundant in the rhizobiome of US-802 and X-639 compared to US-812 and US-897. Carbon and nitrogen metabolism pathways were significantly more abundant in the treated soils compared to controls for US-812 and US-897. The pathway involved in tryptophan metabolism had a significantly greater relative abundance in soils treated with compost compared to the control for US-802 and X-639 (Supplementary Fig. S B).

Relationships between rootstock, compost, MNC, bacterial abundance, alpha- and beta-diversity, and predicted functionality

Our SEM model explained 78%, 63%, 58%, 47%, and 43% of the variance found in MNC, beta-diversity, bacterial abundance, predicted functionality, and alpha-diversity, respectively (Fig. ). Rootstock and compost had significant positive effects on MNC and beta-diversity, with compost showing stronger impacts (Fig. ).
Rootstock and compost had a significant positive effect on predicted functionality and bacterial abundance, respectively. Compost showed a significant positive effect on bacterial abundance and alpha-diversity (Fig. ). Identification of the active taxonomic and predicted functional core rhizobiome The taxonomic core rhizobiome was formed by bacterial taxa belonging to the same eleven genera in the control and treated soils and whose relative abundances did not significantly differ between treatments (Supplementary Table S ). The predicted functional core rhizobiome comprised the same thirteen pathways in the control and treated soils (Supplementary Table S ). However, eight of these pathways (tryptophan metabolism, nitrogen metabolism, carbohydrate metabolism, lipid metabolism, metabolism of other amino acids, metabolism of cofactors and vitamins, and xenobiotics and biodegradation metabolism) were significantly more abundant in the treated soils compared to the control (Supplementary Table S ).
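The multinutrient cycling (MNC) index referenced above is not defined in this excerpt; indices of this kind are commonly built with the averaging approach, Z-scoring each nutrient across samples and averaging the scores. A minimal sketch under that assumption (the nutrient panel and values below are hypothetical, not data from this study):

```python
import numpy as np

def mnc_index(nutrients: np.ndarray) -> np.ndarray:
    """Multinutrient cycling index as the mean of per-nutrient Z-scores.

    nutrients: (n_samples, n_nutrients) root nutrient concentrations,
    e.g., columns for N, P, K, Ca, Mg, S, Mn, Zn, Fe, Cu.
    Returns one MNC value per sample.
    """
    z = (nutrients - nutrients.mean(axis=0)) / nutrients.std(axis=0)
    return z.mean(axis=1)

# Hypothetical example: 4 root samples x 3 nutrients
x = np.array([[1.0, 10.0, 0.5],
              [2.0, 12.0, 0.7],
              [3.0, 14.0, 0.9],
              [4.0, 16.0, 1.1]])
scores = mnc_index(x)  # one index per sample; higher = jointly richer
```

Because each score is an average of standardized components, positive values indicate jointly above-average nutrient status rather than enrichment of any single element.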
We found that the rootstock genotype determined differences in the diversity of active rhizosphere bacterial communities. The rootstock genotype also impacted how compost altered the abundance, diversity, and composition and predicted functions of these active communities. Variations in the active bacterial rhizobiome were strongly linked to root nutrient cycling, and these interactions were root-nutrient- and rootstock-specific. Together, these findings have important agronomic implications as they indicate the potential for agricultural production systems to maximize benefits from rhizobiomes through the choice of selected rootstocks and the application of compost. Direct positive relationships between enriched taxa in treated soils and specific root nutrients were detected which will help identify potentially important taxa for development of agricultural microbiome engineering solutions to improve root nutrient uptake. We also found significant differences in specific predicted functions related to soil nutrient cycling (C, N, and tryptophan metabolisms) in the active bacterial rhizobiome among rootstocks, particularly in soils treated with compost. These results suggest that potential functions of active bacterial rhizobiomes are rootstock-specific rather than redundant among citrus rootstocks. The rootstock genotype determined differences in bacterial diversity but not community composition of the active bacterial rhizobiome in untreated soils. Previous studies have shown that the root metabolic composition can differ among citrus rootstocks [ – ], which may explain the differences in bacterial diversity among rootstocks in this study. The finding that composition (beta-diversity) remained unchanged among rootstocks in untreated soil suggests no or only a minor influence of rootstocks on the recruitment of bacterial communities in the rhizosphere, which agrees with previously published studies . 
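The community-composition comparisons underlying this discussion were tested with PERMANOVA over Bray-Curtis dissimilarities (see Results). A toy one-factor re-implementation of that test is sketched below for illustration only; published analyses of this kind are typically run with established implementations such as adonis2 in R's vegan package, and the data here are simulated:

```python
import numpy as np

def bray_curtis(counts: np.ndarray) -> np.ndarray:
    """Pairwise Bray-Curtis dissimilarities for a (samples x taxa) matrix."""
    n = counts.shape[0]
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            num = np.abs(counts[i] - counts[j]).sum()
            den = (counts[i] + counts[j]).sum()
            d[i, j] = d[j, i] = num / den
    return d

def permanova(d: np.ndarray, groups, n_perm: int = 999, seed: int = 0):
    """One-factor PERMANOVA (Anderson 2001): pseudo-F and permutation p-value."""
    groups = np.asarray(groups)
    n = len(groups)
    labels = np.unique(groups)
    a = len(labels)
    d2 = d ** 2

    def pseudo_f(g):
        ss_total = d2[np.triu_indices(n, 1)].sum() / n
        ss_within = 0.0
        for lab in labels:
            idx = np.where(g == lab)[0]
            sub = d2[np.ix_(idx, idx)]
            ss_within += sub[np.triu_indices(len(idx), 1)].sum() / len(idx)
        ss_between = ss_total - ss_within
        return (ss_between / (a - 1)) / (ss_within / (n - a))

    f_obs = pseudo_f(groups)
    rng = np.random.default_rng(seed)
    hits = sum(pseudo_f(rng.permutation(groups)) >= f_obs for _ in range(n_perm))
    return f_obs, (hits + 1) / (n_perm + 1)

# Simulated data: six "control" and six "compost" communities with shifted composition
rng = np.random.default_rng(1)
control = rng.poisson([50, 5, 5], size=(6, 3)).astype(float)
compost = rng.poisson([5, 50, 5], size=(6, 3)).astype(float)
d = bray_curtis(np.vstack([control, compost]))
f, p = permanova(d, ["control"] * 6 + ["compost"] * 6)
```

The permutation p-value counts how often shuffled group labels yield a pseudo-F at least as extreme as the observed one, which is why clearly separated groups drive p toward its minimum of 1/(n_perm + 1).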
In addition to influencing the bacterial diversity, the rootstocks used in this study were found to directly influence nutrient cycling through alterations of the active rhizobiome. For example, root N, P, and Ca concentrations were significantly and positively correlated with alpha- and beta-diversity for US-812, but not the other rootstocks. Other root nutrients such as Mg and Cu were positively correlated with alpha- and beta-diversity for all rootstocks, suggesting no rootstock effect but a key role of the active bacterial rhizobiome in driving root nutrient cycling. Magnesium is essential for root system growth and fruit quality as it promotes the reduction of reactive oxygen species (ROS) and the distribution of sugars in the plant. Copper is required for plant growth and development as it is involved in different physiological processes such as photosynthesis, respiration, and ethylene signaling. Regardless of the rootstock genotype, compost application altered the composition of the active bacterial rhizobiome. These compost-driven differences in beta-diversity could be due to the bacterial community shifting from oligotrophic to more copiotrophic bacterial taxa in treated soils, as previously observed in the rhizosphere of apple rootstocks treated with compost. For example, a proliferation of known fast-growing copiotrophic consumers of labile C (e.g., Actinobacteria, Bacteroidetes, Chloroflexi, Gemmatimonadetes, and Firmicutes) was observed for all rootstocks in soils treated with compost. These variations in beta-diversity of the active bacterial rhizobiome did not affect core rhizobiome taxa, which suggests that other microbes within the rhizobiome were more responsive to compost application. However, compost application increased the bacterial abundance and alpha-diversity of the rhizobiome only for US-812 and US-897, but not for US-802 and X-639.
Significant positive correlations between increased bacterial diversity and root multinutrient cycling were detected for US-812 and US-897 rootstocks, suggesting a rootstock-specific impact of compost on the rhizobiome community composition that in turn influences root nutrient cycling. Previous studies have also linked increased soil microbial abundance and diversity to nutrient availability after compost application . Interestingly, US-897 and US-812 are known for their positive influence on fruit quality, whereas US-802 and X-639 are known to produce lower-quality fruit . Whether this effect will be enhanced with compost amendments will need to be investigated as the trees become more mature. The interaction between citrus rootstocks and compost was a stronger determinant of changes in bacterial abundance, diversity, and community composition of the active bacterial rhizobiome than compost or rootstocks alone. While recent studies have shown that rootstocks and compost can alter microbial diversity and community composition in the rhizobiome of different crops, our results provide strong evidence of compost and rootstock interactions driving changes in the active rhizobiome (alpha- and beta-diversity) with direct impacts on root nutrient availability. Although recent studies have shown that soil microbial diversity promotes multifunctionality in natural ecosystems [ – ], these observations were mainly restricted to nutrient cycling in bulk soils. Here, we expand on those findings by showing that these interactions also occur in the rhizosphere where they can be controlled not only by the rootstock genotype but also the application of compost. For instance, we observed a strong positive correlation between Zn and Mn root concentrations and alpha- and beta-diversity in the rhizobiome of US-812 and US-897 rootstocks. 
Zn is a micronutrient with a key role in plant defense against pathogens , whereas Mn is essential for photosynthesis and a limiting factor for plant growth . Although our results suggest that the rhizobiome composition improves root nutrient cycling, it is uncertain whether this ultimately translates into increased plant growth, crop production, and stress and disease tolerance in the longer term. At the time of the study, no differences in tree growth and health were observed with the compost amendment, but US-897 produced the most fruit in the first year of production, while US-802 was the most vigorous rootstock (data not shown). These results are expected in this early stage of growth, and it may take several years of treatments and until trees reach full maturity before increases in productivity due to any microbe-induced effect may be observed. Specific active genera in the rhizobiome of composted soils were strongly correlated with root nutrient concentrations. Some of these genera include known plant growth-promoting (PGP) bacteria such as Bacillus , Streptomyces , Pseudomonas , Mesorhizobium , Sphingomonas , and Rhizobium that can solubilize nutrients such as P, S, and Ca and produce diverse phytohormones and siderophores. Although correlation does not imply causation, we found significant associations between several other genera and specific root nutrients in the citrus rhizobiome. For example, members of Acidobacteria were correlated with root Fe, which agrees with several studies reporting that Acidobacteria are avid rhizosphere colonizers and can produce siderophores . We also found strong correlations between members of Bacteroidetes and root P concentration which is in line with previous observations of genera assigned to Bacteroidetes playing a critical role in solubilization of P in the plant rhizosphere . 
Cyanobacteria genera such as Leptolyngbya , Nostoc , Oscillatoria , and Microcoleus were important for predicting changes in root N, P, and K concentrations which is not surprising as Cyanobacteria are known to improve the availability of N, P, and K through N-fixation and solubilization . We also identified genera assigned to Planctomycetes which were correlated with Fe, which is in accordance with previous studies showing their ability to produce siderophores in soils . Together, this knowledge provides valuable information for selecting candidate taxa for future agricultural microbiome engineering solutions . For example, members of the differentially abundant genera in this study may represent candidate taxa for designing microbial consortia with a potential to serve as biofertilizers . Most predicted functions in the rhizobiome were shared among citrus rootstocks, thus supporting the concept of functional redundancy between plant genotypes of the same crop . However, there were significant differences in functional pathways related to biosynthesis of secondary metabolites and C, N, and tryptophan metabolisms among rootstocks in untreated soils, and compost application increased the abundance of these potential functions; however, the magnitude of the responses was rootstock-specific. Overall, these results are different from those of a recent study examining predicted functions in the rhizobiome of different rootstocks for grapevines . That study found no differences in predicted functions between grapevine rootstocks using the Tax4fun tool to predict functional potential. As recently demonstrated, Tax4fun and PICRUSt2 can lead to differences in predicted bacterial functions which could explain the different results between rhizobiome studies . While Marasco et al. 
used a DNA-based approach to characterize rhizobiome communities for grapevines, we used RNA-based estimates to predict bacterial functions, which can be a more accurate and reliable approach for functional predictions in rhizobiomes. In addition, it cannot be ruled out that genotype-specific root exudates determine rhizobiome functions. Interestingly, we detected eight pathways within the predicted functional core citrus rhizobiome that were more abundant in treated soils compared to the control for all rootstocks. These pathways were related to key functions for plant growth such as N, carbohydrate and lipid metabolisms, and metabolism of cofactors and vitamins. Thus, while compost had no impact on the taxa forming the core taxonomic rhizobiome of any rootstock, it did influence the predicted functional core rhizobiome. Although PICRUSt2 is frequently used to predict functions of microbial communities and its effectiveness has been established in multiple environmental studies that utilized both amplicon sequencing and metagenome sequencing, we acknowledge it has some limitations, and other approaches such as shotgun metagenome sequencing can provide more accurate functional profiles of microbiomes. However, our results provide a good starting place for future studies of functional differences between rhizobiomes under the influence of different rootstock genotypes.
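The genus-nutrient and diversity-nutrient associations discussed above are Spearman rank correlations screened for significance (the supplementary heatmaps report Spearman coefficients with significance stars). A compact sketch of such a screen, pairing each genus with each nutrient and controlling the false discovery rate with Benjamini-Hochberg, is shown below on toy data; this is illustrative, not the authors' code:

```python
import numpy as np
from scipy.stats import spearmanr

def bh_adjust(pvals: np.ndarray) -> np.ndarray:
    """Benjamini-Hochberg FDR-adjusted p-values (q-values)."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)
    # enforce monotonicity from the largest p-value downwards
    scaled = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.clip(scaled, 0.0, 1.0)
    return out

def genus_nutrient_correlations(abund: np.ndarray, nutrients: np.ndarray):
    """Spearman rho and BH-adjusted p for every (genus, nutrient) pair.

    abund: (samples x genera) abundances; nutrients: (samples x nutrients).
    """
    rhos, pvals = [], []
    for g in abund.T:
        for nut in nutrients.T:
            rho, pval = spearmanr(g, nut)
            rhos.append(rho)
            pvals.append(pval)
    shape = (abund.shape[1], nutrients.shape[1])
    return np.array(rhos).reshape(shape), bh_adjust(np.array(pvals)).reshape(shape)

# Toy example: one genus, two nutrients (one tracking it, one opposing it)
abund = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
nutrients = np.array([[1.0, 5.0], [2.0, 4.0], [3.0, 3.0], [5.0, 1.0], [4.0, 2.0]])
rhos, qvals = genus_nutrient_correlations(abund, nutrients)
```

Rank correlation is the usual choice here because taxon abundances and nutrient concentrations are rarely normally distributed, and the FDR step guards against the many-pairs multiple-testing problem.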
This study showed that the interaction between citrus rootstocks and compost can influence active rhizosphere bacterial communities with impacts on root nutrient concentrations. In particular, the response of the rhizobiome bacterial abundance, diversity, and community composition to compost was rootstock-specific. Specific bacterial taxa therefore appear to be driving changes in root nutrient concentrations in the active rhizobiome of different citrus rootstocks. Whether rootstock genotype-specific impacts on rhizosphere microbes also determine variations in nutrient concentration in rhizosphere soil and other parts of the tree (e.g., leaves and trunk) should be explored in future studies. In addition, several potential functions of active bacterial rhizobiomes recruited by different citrus rootstocks did not appear to be redundant but rather rootstock-specific. Longer-term studies will determine to what extent rhizobiome alterations impact aboveground traits, especially tree growth and productivity but also resilience to HLB. The study of root exudate composition could also help identify associations of individual taxa with specific root exudate compounds and provide an understanding of how rootstocks and compost control these relationships.
Additional file 1: Fig. S1. Schematic diagram of the field study illustrating the experimental design (A); an untreated control plot (left) and a compost-treated plot (right) – trees are arranged in two rows on raised beds separated by furrows for drainage (B); a grafted citrus tree composed of scion and rootstock that are united at the graft union (C). Fig. S2. A priori generic structural equation model (SEM) used in this study. The numbers in the arrows denote example references used to support our predictions (see References section). Fig. S3. Root nutrient content of citrus trees on four different rootstocks. Soils were untreated (control) or treated with compost. Different letters above the bars indicate significant differences between rootstocks and treatments (linear mixed-effect model and Tukey's HSD; n = 8; *, p ≤ 0.05; **, p ≤ 0.01; ***, p ≤ 0.001). Values are expressed as mean with standard error. Fig. S4. Total abundance of active bacterial communities in the rhizosphere of citrus trees on four different rootstocks. Soils were untreated (control) or treated with compost. Different letters above the bars indicate significant differences between rootstocks and treatments (linear mixed-effect model and Tukey's HSD, n = 8; *, p ≤ 0.05; **, p ≤ 0.01; ***, p ≤ 0.001). Values are expressed as mean with standard error. Fig. S5. Relative abundance of bacterial ASVs at the phylum taxonomic level in the rhizosphere of four different rootstocks. Soils were untreated (control) or treated with compost. Fig. S6. Differentially abundant ASVs at the genus taxonomic level between compost and control treatments for each rootstock. The fold change is shown on the X axis and genera are listed on the Y axis. Each colored dot represents an ASV that was identified by DESeq2 analysis as significantly differentially abundant ( p ≤ 0.05). Fig. S7. 
Heatmaps of Spearman correlation coefficients between bacterial alpha (A) and beta (B) diversity and root nutrients for each rootstock. The shading from blue to red represents low-to-high positive correlation. *, p ≤ 0.05; **, p ≤ 0.01; ***, p ≤ 0.001. Fig. S8. Mean proportion of predicted KEGG categories (A) and pathways (B) in the rhizosphere of citrus trees on four different rootstocks. Soils were untreated (control) or treated with compost. For each row, different letters indicate significant differences between treatments and rootstocks (Tukey's HSD, p < 0.05; n = 8). Table S1. Rootstocks used in this study and their parentage. Table S2. Significance and similarity using the non-parametric multivariate ANOSIM statistical method. Numbers in bold indicate significant effect at p < 0.05. R values close to 1 indicate dissimilarity between treatments. Table S3. ASVs (at the genus level) present in at least 75% of the samples in the control and treated soils identified as the active taxonomic core rhizobiome and their relative abundances. For each row, different letters between treatments indicate significant differences according to the Welch's t-test and Benjamini–Hochberg FDR multiple test correction (p < 0.05). Table S4. KEGG pathways present in at least 75% of the samples in the control and treated soils identified as the active functional core rhizobiome and their relative abundances. For each row, different letters between treatments indicate significant differences according to the Welch's t-test and Benjamini–Hochberg FDR multiple test correction (p < 0.05).
|
Clinical guidelines "Congenital hypothyroidism" | ed4e5097-4a9c-42ca-b193-28b6fa8f9693 | 9764271 | Physiology[mh] |
CH — congenital hypothyroidism
HPS — hypothalamic-pituitary system
HP — hypopituitarism
LH — luteinizing hormone
MRI — magnetic resonance imaging
PP — precocious puberty
Tg — thyroglobulin
TRH — thyrotropin-releasing hormone
TSH — thyroid-stimulating hormone
FNAB — fine-needle aspiration biopsy
US — ultrasound examination
FSH — follicle-stimulating hormone
HR — heart rate
ЩЖ — thyroid gland
ECG — electrocardiography
EchoCG — echocardiography
SD — standard deviation
SDS — standard deviation score
T3 — triiodothyronine
T4 — thyroxine
CNS — central nervous system
IQ — intelligence quotient
WISC — Wechsler Intelligence Scale for Children
Hypothyroxinemia — deficiency of thyroid hormones.
Neonatal screening — mass testing of all newborns for hypothyroidism by measuring TSH in capillary blood; it detects most cases of the disease at the preclinical stage and allows replacement therapy to be prescribed in time.
Thyroid dysgenesis — a structural abnormality of the thyroid gland due to a defect of embryonic development of thyroid tissue, manifesting as aplasia, hemiagenesis, hypoplasia, or ectopy (dystopia).
Dyshormonogenesis — impaired production or transport of thyroid hormones resulting from enzymatic defects (iodine organification, synthesis of thyroglobulin, thyroid peroxidase, etc.).
Primary hypothyroidism — a clinical syndrome that develops because of insufficient production of thyroid hormones due to primary pathology of the thyroid gland itself.
Secondary hypothyroidism — a clinical syndrome that develops because of insufficient TSH production in the absence of primary pathology of the thyroid gland itself that would reduce its function.
Transient hypothyroidism — a state of temporary hypothyroxinemia accompanied by elevated TSH.
Congenital hypothyroidism (CH) is one of the most common congenital diseases of the thyroid gland in children. It is based on complete or partial deficiency of thyroid hormones, which, unless treatment is started in time, leads to delayed development of all organs and systems of the body.
CH is a group of diseases heterogeneous in etiology, caused most often by morphofunctional immaturity of the thyroid gland and, less often, of the hypothalamic-pituitary system (HPS). Hypothyroxinemia leads to metabolic disturbances, a lower rate of oxidative processes and enzyme system activity, increased transmembrane cell permeability, and accumulation of incompletely oxidized metabolic products in tissues. Thyroid hormone deficiency grossly disrupts the growth and differentiation of all tissues and systems of the body. In a child, the central nervous system suffers from the lack of thyroid hormones more than any other system. A low level of thyroid hormones, especially in the first months of life, delays myelination of nerve fibers and reduces the accumulation of lipids and glycoproteins in nervous tissue, ultimately causing morphofunctional disturbances in the neuronal membranes of the brain's conduction pathways. The irreversibility of CNS damage in untreated congenital hypothyroidism stems from the particular pattern of growth and maturation of the newborn brain. During the period of maximal growth and active neurogenesis, which falls within the first 6 months of life, the brain is especially sensitive to adverse influences, including thyroxine deficiency. Thyroid insufficiency during this critical period of most rapid CNS development therefore delays its maturation, leading to irreversible intellectual disability [2–6].
The incidence of congenital hypothyroidism ranges from 1:3000–4000 newborns in Europe and North America to 1:6000–7000 newborns in Japan. The disease is quite rare in people of the Negroid race (approximately 1:30,000) and, by contrast, common in Latin Americans (1:2000). Girls are affected 2–2.5 times more often than boys. According to neonatal screening data, the prevalence of CH in the Russian Federation is 1 case per 3600 newborns (1997–2015).
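The incidence figures above translate directly into expected case counts for a screening program. A small worked example (the cohort size is hypothetical; the 1:3600 rate is the Russian screening figure quoted above):

```python
def expected_cases(births: int, incidence_denominator: int) -> float:
    """Expected number of CH cases given an incidence of 1 per `incidence_denominator` births."""
    return births / incidence_denominator

# Hypothetical cohort of 1,000,000 newborns at the reported rate of 1:3600
cases = expected_cases(1_000_000, 3600)  # about 278 expected cases
```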
E03.0 Congenital hypothyroidism with diffuse goiter. Goiter (nontoxic), congenital, parenchymatous.
E03.1 Congenital hypothyroidism without goiter. Aplasia of the thyroid (with myxedema). Congenital atrophy of the thyroid.
E07.1 Dyshormogenetic goiter. Familial dyshormogenetic goiter. Pendred syndrome.
E07.8 Other specified disorders of the thyroid. Defect of thyroxine-binding globulin. Hemorrhage into the thyroid. Infarction of the thyroid. Sick-euthyroid syndrome.
By level of lesion (the most widely used classification today).
Primary hypothyroidism.
Thyroid dysgenesis (abnormal anlage and structure of the gland):
Dyshormonogenesis (impaired synthesis of thyroid hormones):
Central hypothyroidism (secondary, tertiary):
Peripheral resistance to thyroid hormones (mutations of the THRA and THRB genes).
Transient hypothyroidism.
By severity.
Latent (subclinical) — elevated TSH with a normal level of free thyroxine (T4).
Manifest — hypersecretion of TSH with a reduced level of free T4 and the presence of clinical manifestations.
Severe (complicated) — may include cretinism, heart failure, effusion into serous cavities, and secondary pituitary adenoma.
By degree of compensation:
Complicated hypothyroidism (as a rule, cases not recognized in time and neglected) without timely prescribed and properly adjusted replacement drug therapy can lead to the development of hypothyroid (myxedema) coma.
In the vast majority of cases (85–90%) the hypothyroidism is primary. Among primary cases, 85% are sporadic, and most of these are caused by dysgenesis (embryopathy) of the thyroid gland. According to various authors, thyroid agenesis occurs in 22–42% of cases, the glandular tissue is ectopic in 35–42% of cases, and thyroid hypoplasia is found in 24–36% [8–12]. Secondary or tertiary CH, manifesting as isolated TSH deficiency or hypopituitarism, is much rarer (5–10%) [9–13]. A special form of CH is transient hypothyroidism of the newborn. This form of the disease is most often observed in regions endemic for iodine deficiency. Transient hypothyroidism can also arise from immaturity of the iodine organification system, especially in premature, immature newborns.
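The severity grading described above amounts to a simple decision rule over TSH and free T4. A minimal sketch of that rule follows; the numeric cut-offs are illustrative placeholders, not values from these guidelines, and real use requires age-specific laboratory reference intervals:

```python
def grade_primary_hypothyroidism(tsh: float, free_t4: float,
                                 tsh_upper: float = 4.0,
                                 ft4_lower: float = 10.0) -> str:
    """Grade primary hypothyroidism by severity from TSH and free T4.

    tsh_upper (mIU/L) and ft4_lower (pmol/L) are hypothetical assay
    cut-offs; age-specific reference intervals must be used in practice.
    """
    if tsh <= tsh_upper:
        return "no biochemical evidence of primary hypothyroidism"
    if free_t4 >= ft4_lower:
        return "latent (subclinical): elevated TSH, normal free T4"
    return "manifest: elevated TSH, reduced free T4"

# Hypothetical values:
grade_primary_hypothyroidism(12.0, 14.0)  # latent (subclinical)
grade_primary_hypothyroidism(40.0, 5.0)   # manifest
```

Note that the "severe (complicated)" category above is defined by clinical complications (cretinism, heart failure, effusions), not by hormone levels alone, so it cannot be captured by this biochemical rule.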
Transient hypothyroidism in the newborn may also be caused by maternal intake during pregnancy of thyrostatic and other drugs that disrupt thyroid hormone synthesis in the fetal thyroid gland. Transplacental transfer of maternal TSH-receptor-blocking antibodies has been described. With the development of molecular-genetic analysis methods, views on the etiology of congenital hypothyroidism have changed considerably in recent years. To date, a number of genes have been identified whose mutations lead to abnormal formation, migration, and differentiation of the thyroid gland, defects of thyroid hormone synthesis, and disturbances of the hypothalamic–pituitary axis. The absence of specific symptoms characteristic of a particular genetic defect makes it impossible to test a single gene in isolation to identify the mutation. Variants of thyroid dysgenesis have been studied most extensively; however, abnormal formation of this vital organ has been shown to be associated with gene mutations in only 2% of cases, while in the remainder the cause remains unknown. Among hereditary forms of the disease, defects of the dyshormonogenesis genes are the predominant causes of CH, as evidenced by the results of molecular-genetic analysis [10–17].
Clinical manifestations of hypothyroidism. The clinical manifestations and course of hypothyroidism differ substantially across age groups [14–18]. In childhood they depend on the age at which the disease manifests, the duration of the hypothyroid state, and the timing of the start of replacement therapy. In the first month of life, when early diagnosis is crucial, the typical clinical picture of CH is observed in only 10–15% of cases. CH in newborns presents with the following symptoms: In older children (after 5–6 months of age), the clinical manifestations of hypothyroidism are similar to those in adults. In addition, in untreated children with CH, a growing delay in psychomotor and physical development, and later in sexual development, comes to the fore. Motor development lags behind: Skin: A characteristic complex of respiratory symptoms: Pronounced: Characteristic: Characteristic: Characteristic: Transient hypothyroidism of the newborn is a state of temporary (passing) hypothyroxinemia accompanied by an elevated blood TSH level. A transient rise in TSH is in most cases related to functional immaturity of the hypothalamic–pituitary system in the postnatal period. This condition occurs most often in the following cases: At the stage of primary screening it is practically impossible to distinguish congenital from transient hypothyroidism. These conditions must be differentiated at stage II of screening, that is, in the outpatient setting, by repeat measurement of serum TSH and free T4 after withdrawal of replacement therapy. Predictors of transient hypothyroidism: Secondary hypothyroidism is most often a consequence of hypopituitarism (HP), so the presence of other typical symptoms of HP (malformations of the brain and skull, hypoglycemia, micropenis, cryptorchidism in boys) suggests the correct diagnosis.
Secondary hypothyroidism caused by dysfunction of the adenohypophysis or hypothalamus (mutations of the Pit-1 and PROP-1 genes) is accompanied by deficiency not only of TSH but also of other tropic hormones. Congenital isolated TSH deficiency is an extremely rare autosomal recessive disease caused by mutations of the genes for the TSH α- and β-subunits [15–17][19–23]. Compared with primary hypothyroidism, secondary hypothyroidism has a milder, more blurred clinical picture. In secondary hypothyroidism, total and free T4 concentrations are reduced, while the TSH level may be moderately elevated, normal, or low. Transient secondary hypothyroidism is detected more often in preterm and low-birth-weight newborns. It may be due to immaturity of the hypothalamic–pituitary system or to HP. Distinguishing true secondary hypothyroidism from transient secondary hypothyroidism is very difficult. The decrease in T4 and T3 levels in preterm newborns reflects their adaptation to stress and is not an indication for thyroid hormone replacement therapy. By the first or second month of life, serum T4 and T3 levels gradually increase and reach the normal values typical of full-term infants of the same age. True thyroid dysfunction in such children can be identified after their weight and development have normalized.
Criteria for establishing the diagnosis. The main goal of screening for CH is the earliest possible identification of all newborns with an elevated blood TSH level. Children with an abnormally high TSH level subsequently require in-depth examination for correct diagnosis [21–32]. It is recommended to diagnose CH in newborns on the basis of neonatal screening for CH and/or measurement of the blood TSH level and the serum free T4 level [21–27] (strength of recommendation C; level of evidence 3). Measurement of blood TSH in the newborn is recommended no later than day 5 of life (optimally on completion of the third full day) in a dried whole-blood spot [21–27] (strength of recommendation C; level of evidence 4). Examination and further follow-up of children in three stages is recommended [21–27]. Stage I — maternity hospital, inpatient unit, children's polyclinic. In all full-term newborns, the screening blood sample (capillary blood from the heel) is taken no later than day 5 of life (optimally after three full days from birth); in preterm infants, on days 7 and 14 of life; the drops (6–8 drops) are applied to special porous filter paper. All blood samples are sent to a specialized medical-genetic laboratory. Stage II — medical-genetic laboratory. The laboratory measures the TSH concentration in the dried blood spots. For the diagnosis of CH, mass measurement of TSH in capillary blood is used (for example, the AutoDelfia neonatal thyrotropic hormone assay, a fluorometric method). TSH cut-off values are defined by the test-system kits used in each laboratory. 1. A capillary-blood TSH below 9 mU/L in a full-term infant aged 4–14 days is considered normal. 2.
A capillary-blood TSH above 9 mU/L in a full-term infant aged 4–14 days requires repeat TSH measurement from the same blood sample; if a similar result is obtained, the healthcare facility is urgently notified so that blood can be drawn again (retest) and the capillary blood sample delivered to the neonatal screening laboratory for TSH measurement. A. Capillary-blood TSH from 9.0 to 40.0 mU/L: the laboratory re-measures TSH from the same sample; if a similar result is obtained, the polyclinic is urgently notified and venous blood is taken for serum TSH and free T4 measurement, or retesting (repeat capillary blood sampling) is performed. B. Capillary-blood TSH above 40.0 mU/L: the laboratory re-measures TSH from the same sample; if a similar result is obtained, the polyclinic is urgently notified and venous blood is taken for serum TSH and free T4 measurement. If the results cannot be obtained on the day of blood sampling, replacement therapy with thyroid preparations is started without waiting for them. If the results turn out to be within normal limits, therapy is discontinued. Interpretation of retest results in capillary blood: Interpretation of venous blood results (confirmatory diagnostics): Stage III — children's polyclinic. At this stage, children with CH identified by neonatal screening are followed up dynamically by pediatric endocrinologists (strength of recommendation C; level of evidence 4).
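The stage-II laboratory rules above amount to a simple threshold triage. As a minimal illustrative sketch only (the function name and return labels are ours, not part of the guideline, and actual cut-offs must be taken from the test system used by each laboratory, as the guideline notes), the decision for a full-term infant aged 4–14 days can be expressed as:

```python
def triage_capillary_tsh(tsh_mU_per_L: float) -> str:
    """Classify a capillary-blood TSH screening result.

    Cut-offs follow the guideline text for full-term infants aged
    4-14 days (9 and 40 mU/L); the labels are illustrative only.
    """
    if tsh_mU_per_L < 9.0:
        # Normal result: no further action required.
        return "normal"
    elif tsh_mU_per_L <= 40.0:
        # 9.0-40.0 mU/L: repeat TSH from the same sample; if confirmed,
        # notify the clinic and take venous blood (TSH, free T4) or retest.
        return "retest"
    else:
        # Above 40.0 mU/L: repeat from the same sample; if confirmed, take
        # venous blood and start replacement therapy without waiting for
        # the result when it cannot be obtained the same day.
        return "urgent"

print(triage_capillary_tsh(5.0))   # normal
print(triage_capillary_tsh(20.0))  # retest
print(triage_capillary_tsh(55.0))  # urgent
```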
Complaints: History of post-term pregnancy.
Complaints:
Complaints:
Complaints: It is recommended to collect a detailed history and the patient's complaints for correct diagnosis and treatment planning. Attention should be paid to the clinical symptoms of hypothyroidism (strength of recommendation C; level of evidence 5).
CH in newborns presents with the following symptoms: For the diagnosis of CH in newborns, pediatricians, neonatologists, and endocrinologists are recommended to use the Apgar scale for CH in children, which helps raise suspicion of the disease at an early stage. At a total score above 5 points, CH should be suspected (strength of recommendation C; level of evidence 5).
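Diagnostic scales of this kind are additive: each clinical sign contributes a fixed number of points, and a total above 5 raises suspicion of CH. The sketch below shows only that mechanism; the sign names and point values are hypothetical placeholders, not the published scale (the real items are listed in Appendix G1):

```python
def ch_score(points_per_sign: dict, observed: set) -> int:
    """Sum the point values of the clinical signs observed in a newborn."""
    return sum(points for sign, points in points_per_sign.items() if sign in observed)

def ch_suspected(total_score: int, threshold: int = 5) -> bool:
    """Per the guideline, a total above 5 points should raise suspicion of CH."""
    return total_score > threshold

# Hypothetical sign weights for illustration only -- the real items and
# point values are those of the published scale (Appendix G1).
weights = {"sign_a": 2, "sign_b": 2, "sign_c": 1, "sign_d": 1}
score = ch_score(weights, {"sign_a", "sign_b", "sign_c", "sign_d"})
print(score, ch_suspected(score))  # 6 True
```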
Hormonal investigations. For patients with CH, the following are recommended: Additional hormonal investigations. For patients with CH, the following are recommended when indicated: Thyroid ultrasound. Thyroid ultrasound is recommended for patients with CH: Thyroid scintigraphy (with sodium pertechnetate [99mTc]). Thyroid scintigraphy is recommended for patients with CH when ultrasound shows aplasia or ectopia: It can be performed in all children with CH regardless of age, including newborns. If the thyroid gland is not visualized on scintigraphy, the diagnosis is beyond doubt. Unlike ultrasound, this method can detect dystopically located thyroid tissue (strength of recommendation C; level of evidence 4). Performing this procedure must not delay the timely start of treatment with levothyroxine sodium. It is carried out promptly, before treatment starts or within 7 days of initiating hormone replacement therapy, or after therapy has been withdrawn for 2–3 weeks. The use of sodium pertechnetate [99mTc] is based on the ability of thyroid cells to accumulate this radiopharmaceutical (peak accumulation is observed from the 10th to the 30th minute after administration), similarly to iodine (sodium pertechnetate [99mTc] does not take part in thyroid hormone synthesis, as it does not undergo organification). Technetium has a short half-life (~6 h) and is accordingly cleared from the body completely and fairly quickly. Long experience with sodium pertechnetate [99mTc] in pediatric practice in many conditions has been accumulated (since the 1960s), and its safety has been proven. Compared with sodium iodohippurate and 123-I, sodium pertechnetate [99mTc] is used considerably more often; its use is justified above all by the lower radiation burden on the body, as well as its lower cost and greater availability. It has been established that rudimentary dystopic thyroid tissue can produce thyroid hormones for a fairly long time, with its functional activity declining markedly after the age of ten.
In such cases, CH with late manifestations (a late form of CH) may be diagnosed. There are various variants of thyroid dystopia: at the root of the tongue or along the thyroglossal duct, and CH of widely varying severity may be observed. Molecular-genetic testing. Molecular-genetic testing is recommended for patients with CH after medical-genetic counseling in familial cases of the disease or when CH is combined with other organ pathology: Molecular-genetic testing has been shown to be of high value for establishing a precise etiological diagnosis, and its results can be used in prenatal diagnostics when biallelic mutations are confirmed or when dominant inheritance of the disease has been proven (PAX8, NKX2-1).
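To make the half-life argument for 99mTc concrete: with a physical half-life of about 6 hours, the remaining activity falls to one sixteenth of the administered amount within a day (biological elimination makes effective clearance even faster). A minimal calculation, with a function name of our own choosing:

```python
T_HALF_HOURS = 6.0  # approximate physical half-life of 99mTc, per the text

def remaining_fraction(hours: float, t_half: float = T_HALF_HOURS) -> float:
    """Fraction of the initial 99mTc activity remaining after `hours`
    of physical decay (biological elimination is ignored here)."""
    return 0.5 ** (hours / t_half)

print(remaining_fraction(6.0))   # 0.5  (one half-life)
print(remaining_fraction(24.0))  # 0.0625 (four half-lives, 1/16)
```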
For patients with CH, the following are recommended: Criteria of adequacy of CH treatment: Treatment of transient hypothyroidism. For patients diagnosed with transient hypothyroidism, the following management algorithm is recommended. If TSH and free T4 values are within the reference ranges, treatment is not resumed; follow-up examinations with measurement of serum TSH and free T4 concentrations are carried out 2 weeks, 1 month, and 6 months after treatment is stopped. If the diagnosis of CH is confirmed, treatment with levothyroxine sodium is continued under constant monitoring of the adequacy of therapy. Note: if the TSH level has ever risen during therapy because of an insufficient dose of levothyroxine sodium or non-adherence to the dosing regimen, interrupting treatment to clarify the diagnosis is not recommended; in this case the diagnosis of CH is beyond doubt (strength of recommendation C; level of evidence 5).
Surgical treatment in CH is recommended: for patients with goiter, in the presence of: for patients without goiter:
For patients with CH, the following are recommended: No contraindications have been identified (strength of recommendation C; level of evidence 5).
Follow-up and prognosis. Patients with CH are recommended to receive continuous, comprehensive, in-depth follow-up by specialists of various profiles (endocrinologist, neurologist, audiologist, speech therapist, medical psychologist (neuropsychologist); assessment of intellectual development using the Wechsler test (children's version); in the presence of cognitive impairment, mental disorders, or malformations, consultation with a psychiatrist, cardiologist, and others) (strength of recommendation C; level of evidence 5). The prognosis for neuropsychological development in CH depends on many factors. Researchers in all countries agree that the decisive factor for a favorable prognosis of the intellectual development of a child with CH is undoubtedly the timing of the start of replacement therapy with levothyroxine sodium, although some authors note that even with early treatment a small proportion of children retain some degree of intellectual impairment. The adequacy of treatment in the first year of life is an extremely important factor. Thus, with rare exceptions, all children with CH who receive early and adequate treatment have the opportunity to achieve optimal intellectual development.
Planned hospitalization in a medical facility is recommended: Inpatient hospitalization is not recommended when compensation can be achieved on an outpatient basis (strength of recommendation C; level of evidence 5). Indications for discharging the patient from the medical facility: Emergency hospitalization in a medical facility is recommended in the event of:
No additional information.
Funding sources. The work was carried out on the authors' initiative without external funding. Conflict of interest. The authors declare no obvious or potential conflicts of interest related to the content and publication of this article. Authors' contributions. All authors approved the final version of the article before publication and agreed to be accountable for all aspects of the work, including proper investigation and resolution of questions related to the accuracy or integrity of any part of the work.
APPENDIX G1. APGAR SCALE FOR THE DIAGNOSIS OF CONGENITAL HYPOTHYROIDISM IN NEWBORNS. Name: Apgar scale for the diagnosis of congenital hypothyroidism in newborns. Source: Dedov I.I., Peterkova V.A. Handbook of the Pediatric Endocrinologist. Moscow: Litterra, 2020, p. 98. Type: rating scale. Purpose: assessment of clinical symptoms for the diagnosis of CH. APPENDIX G2. WECHSLER TEST (CHILDREN'S VERSION). Russian name: Wechsler test (children's version). Original name: Wechsler Intelligence Scale for Children, WISC. Source (official developers' site): Wechsler test: diagnostics of the structure of intelligence (children's version): methodological guide / Yu.I. Filimonenko, V.I. Timofeev. St. Petersburg: IMATON, 2016. 106 p. (IMATON. Professional psychological instruments) www.imaton.com. Type: rating scale. Purpose: assessment of the structure of intelligence in children aged 5 to 16 years.
Colorectal cancer survivors’ adjustment to permanent colostomy in Switzerland: A qualitative analysis | 39f13e29-067c-49f1-a388-2d7bab941b62 | 11800697 | Surgical Procedures, Operative[mh] | This study aims to explore the difficulties emerging through the transition from illness to survivorship experienced by Swiss colostomized CRC survivors. So far, international studies have shown broadly similar results regarding the difficulties of colostomized CRC survivors after treatment . The situation in Switzerland needs to be examined, as post-treatment data are lacking. Perceived health status in Switzerland is higher than the Organization for Economic Co-operation and Development (OECD) average (4.2% of the Swiss population aged 15 years and over consider themselves in poor or feeble health in 2019, compared with an average of 8.5% in OECD countries) . Switzerland is among the countries with high survival rates after cancer . In this regard, it is essential to point out that studies showed that the context (i.e. socio-economical, cultural, or national) influences the adaptation of individuals during their return to normal functioning or regarding their perceived health status. The contribution of this study will be to explore the situation in a country with an excellent healthcare system. This study will thus contribute to the knowledge base on adaptation to colostomized CRC survivors in countries with different healthcare systems.
Design and procedure We opted for a qualitative research design to investigate the lived transitional experience of permanent colostomized CRC survivors. The topic is often taboo for colostomized CRC survivors, and a face-to-face interview remains a rare opportunity to gather relevant data. This methodological approach enables the investigation and validation of the lived experiences of colostomized CRC survivors through an inductive thematic analysis. For this experiential qualitative research design, we employed an inductive thematic analysis framework based on a critical realism ontology and a constructionist epistemology. Inductive thematic analysis is particularly well suited to this research: its coding approach enables a comprehensive and nuanced understanding of personal experiences within their sociological and cultural contexts. The choice of this methodology reflects our commitment to authentically conveying the diversity of experiences of colostomized CRC survivors. We recognize language as a mediator through which the reality of survivors' experiences is partially accessible to researchers; it enables a comprehensive understanding of personal experiences within the particular social and medical contexts induced by colostomy. We recruited permanent colostomized CRC survivors in the French-speaking part of Switzerland. The study's inclusion criteria were: being a colostomized CRC survivor, being at least 18 years old, and speaking French, the language of the study material. The ethics committee of the canton of Vaud (CER-VD) approved this study (decision number 2018-00676). The Ligue vaudoise contre le cancer financially supported this study. To recruit colostomized CRC survivors, we contacted the presidents of two support groups of colostomized individuals located in Switzerland: Ilco Vaud and Ilco Neuchâtel-Jura-Jura Bernois. Both were interested in the study and allowed us to present it during one of their general meetings.
We distributed the study information sheet validated by the ethical committee. Through these meetings, we met around 70 members. Among them, 15 individuals showed their interest. We planned a face-to-face semi-structured interview at the participants’ homes and collected their written consent. Participants did not get financial compensation, but we offered them some chocolate for their time at the end of the interview. Participants Among the 15 interested individuals, one withdrew due to additional medical difficulties, one did not manifest further interest, and four were not eligible for the study. Even though they had an ostomy, it was not related to CRC. Thus, the final sample comprised nine participants (four women and five men). They lived with a permanent colostomy for an average of 16.4 years (SD = 11.8, range = 0.5–35.0, median = 18.0). Their mean age was 73.1 years (SD = 9.5, range = 56.0–88.0, median = 73.0). Seven were engaged in a couple’s relationship. Among them, six lived with their partner. One was widowed. All participants were retired except one, who was unemployed at the time of the interview. Two had a school education, five had an upper secondary education, and two had a higher education. Interview We developed a semi-structured interview guide. It explored four different topics: (i) lived experience of the illness; (ii) relationships with close, everyday relationships and social activities; (iii) coping strategies and available resources; (iv) recommendations and future projects. The first author conducted the interviews in French under the supervision of a senior researcher trained in qualitative methods. They took place between July 2018 and November 2018. They lasted an average of 66.56 minutes (SD = 17.54, range = 38–97, median = 68). All were audio-recorded and then transcribed verbatim to be analyzed. The first author did the transcription. 
The translations from French to English of the excerpts presented in this article were made by Sandra Vuilleumier. Analysis We analyzed the transcripts using thematic analysis. Our results predominantly rely on inductive reasoning, and we elaborate on their implications in the context of the existing literature in the discussion. The first and third authors read the transcripts verbatim several times to familiarize themselves with the data. They then generated codes from the verbatims, which they grouped into categories. Codes represent a unique idea or basic piece of information. A unique relationship links the codes belonging to a category. Categories were then regrouped into themes, which organize the data around a central idea or concept and account for a significant amount of the data. As critical realist researchers, we acknowledge our impact on study participants (e.g. what they tell us or who participates). In addition, we are convinced that our subjectivity shapes our comprehension of the analysis and data collection. Our sensitivity leads to different interpretations during data analysis that result from subjectivity, enabling the development of a critical reflection. We limited our subjectivity when undertaking the analysis by discussing the interviews and the analyses. Nevertheless, this heightened our introspection and fostered a more comprehensive perspective. The insights and the experience shared allowed us to broaden our thinking. Furthermore, we kept a diary after each interview, transcription, and data analysis. It facilitated the documentation of our feelings and ideas, mitigating the influence of biases such as preconceived ideas, thereby enhancing the reliability of our results. Although this approach may marginally diminish reflexivity, the benefits outweigh this concern. The first and third authors were responsible for the data collection and analysis. At the time of the interviews, the first author was pursuing a master's degree in health psychology.
His professional trajectory included several years of experience in medical care as a healthcare aide in a hospital and treatment and rehabilitation centers. Additionally, he contributed to public health research within a foundation and engaged with several political university authorities. These multifaceted experiences endowed him with a deep understanding of the challenges arising from illness. However, a reticence to broach sexuality may have inadvertently limited the depth of study outcomes. The third author, with over two decades of experience as a nurse in diverse healthcare settings, transitioned to a role as a psychologist specializing in qualitative research in health psychology. Accumulating extensive expertise in health research, from diseases to healthcare education, she guided and supported the first author through the analytical process.
We categorized the lived experience of Swiss colostomized CRC survivors into four themes: (i) the colostomy: a handicap for daily activities; (ii) redefining the self: navigating through identity shifts, self-perception, and psychological support following a colostomy; (iii) relations with the medical staff: between trust and understanding needs; and (iv) the eternal dilemma: disclose or not one's health condition to receive support. In the following section, we describe each theme. The colostomy: A handicap for daily activities Participants perceived the colostomy as a handicap that can impede or even stop some activities. Some participants put their disability into perspective; for example, P2 (woman) explained: "Compared to someone who is missing an arm, I can work." However, others, such as P1 (man), insisted: "it is a major disability. It is not a small disability, it's very important." Participants' self-perception influenced their engagement in activities and vice versa. Some took back control of their life, such as P1 (man), who expressed that he "has plans for the future because we think we're in good shape physically, and we can still do things together." Some adapted and showed resilience, such as P8 (woman), stating: "You gain confidence, and then it's okay." Most of them agreed that they had no choice but to adapt. P2 (woman) said, resigned: "Partly, I told myself this is how it's going to be. I have no choice now." However, concerns about pouch leakage or perceived physical limitations led participants to refrain from engaging in activities they enjoyed, fostering social isolation. P1 (man) reported that "we cannot rely on the pouch." P9 (woman) summed it up: "We used to go to the beach often to swim.
And we stopped that because I said no.” P4 (man) confirmed: “Let’s say that we’re not as free as we used to be.” These thoughts led some to stay at home where they felt safer: “Because, as to going out, we do it less and less because if we go to the movies or to see a play and if it starts to smell bad, it’s a disaster.” (P4, man). All participants experienced at least one pouch leakage. P3 (man) recalled: “It happened to me that I was eating at friends’ houses, and during the meal […], I felt ‘Oh shit!’. There was pressure, it swelled up, it wasn’t stuck right. There was a lot of it in my pants.” Even at home, participants and their relatives suffer from these inconveniences, as P1 (man): “And then at night, well, it’s a disaster. Oh well, that’s unpleasant, for me and for her who does the laundry.” Driven by fear of pouch leakage or smell, participants remained alert concerning their colostomy. They always carry some material to change the pouch to be confident. They exhibited behaviors bordering on obsessive-compulsive disorder. P2 (woman) described her rituals: “I also have some reflex gestures [she puts her hand on the pouch]. I move my hand and tada. I touch it just like this with my hand, and there we go, no worries.” Means of travel influenced the scope of activities that could be carried out. Preferences leaned toward options allowing quick stops in case of pouch leakage. Long journeys required extensive mental and material preparation. P5 (woman) illustrated the challenges: “If I wanted to leave for two or three days, it’s okay. Now, I have to think about what I need to take, if I have enough or not enough.” Colostomized CRC survivors faced challenges related to constipation or diarrhea that impaired their ability to engage in activities, as P5 (woman) attested: “Because of food, I don’t feel free. Maybe sometimes I can eat. 
But it depends on what I want to do next few nights.” However, for others, food remained one of the few areas where they could fully exercise their autonomy and maintain a sense of personal liberty. P4 (man) remarked: “The food, I don’t care, and I don’t want to care. If I have trouble, I know why. But I don’t want to begin to restrict myself on everything, let’s say.” P7 (man) followed the same reasoning: “I want morels mushroom, well sometimes I eat morels mushroom. But it doesn’t necessarily do any good. And then the morels mushroom come out here.” Such coping strategies appeared prevalent among those who mentioned an experience of social isolation in their interview and may signal a support need. Finally, the colostomy impeded their sexual life. Good communication within the couple mitigated the effects, although significant changes in sexuality were acknowledged. P1 (man) asserted that “sexuality is fucked” and P2 (woman) declared: “What changes in the daily life is having sexual relations; that’s undeniable.”

Redefining the self: Navigating through identity shifts, self-perception, and psychological support following a colostomy

Having a colostomy requires adaptations and often becomes a source of anxiety. Talking about the colostomy and recalling the associated events caused much stress for some participants: “It embarrasses me, and it reminds me of the whole process. All of it, from the beginning. It’s embarrassing me. Really, it embarrasses me.” (P4, man). Cancer, treatments, and the colostomy negatively altered the self-image of colostomized CRC survivors, as P9 (woman) highlighted: “When I have to look at myself, that’s the worst part. When you have to lower your head and then see, take care of it. It’s not easy.” P2 (woman) described her discomfort: “When I look at myself in the mirror in the morning, obviously I feel different.” Many felt less self-confident than before the cancer.
However, some coped by personifying their colostomy, assigning it nicknames such as “my friend” or “Brutus.” Perceiving the colostomy as external lessened the threat to their identity. Despite experiencing psychological difficulties, colostomized CRC survivors expressed reluctance to seek the help of psychologists. The support and understanding provided by the ostomy nurses appeared sufficient for them. The few who consulted a psychologist expressed disappointment, as they had hoped for immediate relief. Several mentioned that they employed a relativization strategy by comparing their experiences to those enduring tremendous suffering. However, many colostomized CRC survivors also expressed a focus on the positive aspects of their life, such as P1 (man): “Well, listen, we enjoy life, the emotions we have, we have developed them. We’ve become sensitive to many things.” Aligning with personal values became a priority for most, as P2 (woman) explained: “When you’ve been through what I’ve been through, you want to focus what’s essential. […] To agree with my choices and stop, in a way, enduring ‘professional choices’, things that don’t suit me, actually.”

Relations with the medical staff: Between trust and understanding needs

The cancer diagnosis was frequently unexpected and shocking and left the participants in a vital emergency, prompting them to take any measures necessary to survive. P2 (woman) confided: “Obviously, I collapsed there, so, uh.” Furthermore, P9 (woman) said: “I said to myself, that’s it, everything is falling apart, what’s happening?” For some, the announcement of the diagnosis left sequelae in the form of a fear of recurrence, as P4 (man) evidenced: “[The doctor] gave me a very low survival chance.” Since this event, he has feared a recurrence.
Others, like P5 (woman), remained apprehensive, stating: “I don’t know if, as the doctor tells me: ‘cancer can be completely cured’, but balanced […] Once we enter there, we are constantly monitored until the end of our days.” While they entrusted their physicians with the responsibility of treating them, many participants were not aware of the nature and implications of a colostomy. P5 (woman) indicated: “No, he didn’t explain to me what an ostomy was. Only that there was a surgery.” Finally, for some, the colostomy symbolized victory over cancer. A few embraced optimism, exemplified by P7 (man)’s statement: “I said: ‘Ah, I got my pouch. I’m good for 20 years.’” At the hospital, participants acknowledged the value of caregivers. Stoma therapists provided extensive counseling and support after surgery. However, most CRC survivors required more support upon returning home after surgery. P3 (man) explained: “But the ostomy girl [i.e., ostomy nurse], she comes every ten days, but that’s not enough in the beginning.” Despite this favorable perception of healthcare professionals, tensions arose when dealing with non-specialist nurses who lacked knowledge about colostomy. P1 (man) recalled an occasion on which he felt like a burden: “I want to change the pouch. Could you be there for the material, just in case… And [the nurse] came but half-heartedly, right. She didn’t like it. […] And then, she said to me, well if it’s okay, I won’t come back. She didn’t care at all. It wasn’t her business.” Finally, participants expressed the feeling that the medical and nursing staff showed disinterest in some of their comorbidities, such as prolapse, skin irritation, or blood in the feces. This raised doubts about the skills and knowledge of the caregivers. A few participants lost confidence in medical staff and felt neglected, as P7 (man) expressed: “I think they don’t know. They don’t know. We don’t know who to ask to get an answer.
To say, this is how it is, that’s all.”

The eternal dilemma: Disclose or not one’s health condition to receive support

Colostomized CRC survivors faced a dilemma: they might need more support but also wanted others to consider them normal. Although those around the participants assisted them in many tasks, such as mowing the lawn or moving the vacuum cleaner across floors, and accepted their colostomy, many participants were often afraid to disclose their medical condition, even to their relatives. They wore loose-fitting clothes, high-waisted pants, or even a belt to conceal their pouch. More than half of the participants feared shocking others if they allowed them to see, hear, or feel their colostomy. P3 (man) asserted: “Of course, it can bother others. What’s that thing? Uh, he’s got something coming out of his belly. What the hell?” P4 (man) agreed: “If it starts to stink, do you believe he’s going to say to me: this one, he’s farting, or what is he doing? He doesn’t know that I have an ostomy, let’s say. […] No, exactly, I can’t do anything about it, but the thing is, he doesn’t know that I have an ostomy.” Therefore, they concealed their condition. At times, their fears materialized, as P3 (man) experienced: “She probably smelled it, a little bit of it. And then, she got up and ran away.” Such behavior hurt, offended, and reinforced a negative self-image. Participants felt “put aside, pestiferous.” (P3, man). P5 (woman) mentioned: “It shocks me that he has to react this way.” Contrastingly, a lack of reaction from their surroundings boosted their confidence in their body image. P6 (man) noted that his colleagues simply “don’t care,” providing a positive contrast to his fears of rejection. Overall, participants expressed positive relationships with individuals in their immediate social circle, as P9 (woman) stressed: “And sometimes we laugh about [the pouch]. Depending on the noise, we laugh about it. And there is no embarrassment.
We never hide anything from each other. With some friends there is no embarrassment. It goes very well.” Regarding their professional lives, some participants disclosed their health issues to their employers, whereas others concealed them out of apprehension regarding termination. Divulging their condition allowed them to adjust their workload and receive support. In addition, they felt relieved to present themselves freely, as P2 (woman) mentioned: “I had written in my email that I was open to all questions, that we could discuss it, and that there were no taboos. That’s it. But it was lucky I did it! Because if I hadn’t, it would have been difficult.” By contrast, concealing the pouch was distressing. Participants reported that society at large remained largely unaware of the challenges they face and their specific needs. One recurring issue encountered by the participants pertained to the coverage provided by health insurance companies, as they noted that their medical equipment was essential and that they could not forego its use. Participants sought a substantial amount of information, primarily relying on professionals and support groups for assistance. Such groups provided a space to talk without taboos. Inexperienced colostomized individuals benefited from the experience of others. P3 (man) said he could “give advice, reassure them, tell them that no one kicked the bucket. That we can battle through. That we must keep smiling, that we must go forward. That we must not retreat into the grave, that we must move forward. No, if I can do anything to help, I gladly will. There is no problem.”
The descriptive examination of the data allowed for a comprehensive understanding of the observed phenomena, shedding light on four themes. We explore the potential links and associations between the themes and the literature in the discussion. Our participants face enduring physical, psychological, and social difficulties after treatment completion. These difficulties significantly impact daily life and are not merely a matter of managing the pouch. The following section discusses the key elements that contribute to the state experienced by colostomized CRC survivors. The first theme focuses on the disability associated with colostomy, leading to the cessation of activities. Our findings match those of regarding the complaints of having a disability, resulting in a diminished sense of agency and negatively influencing engagement in activities. Our results suggest a reciprocal relationship between self-perception and activities. The literature does not mention this relation but describes colostomized CRC survivors as feeling betrayed by their bodies. A sense of betrayal may be associated with a low self-image, potentially leading to stopping activities. However, our results confirm that colostomized CRC survivors must adapt to their colostomy. reported similar challenges among participants based in Singapore. What distinguishes our study is the emergence of rituals that could be associated with obsessive-compulsive tendencies in the long run. Colostomized CRC survivors attempt to control the pouch. This control could also involve nutrition and, therefore, digestion. Through food, colostomized CRC survivors could influence digestion and thus experience a sense of freedom, constituting a novel perspective. Given the disabling effects of the colostomy and of digestion, our results show that colostomized CRC survivors pay close attention to transport modalities. This illustrates the breadth of their challenges within mundane contexts.
To our knowledge, this is the first study to highlight this aspect. Concerning sexuality, survivors also perceive the colostomy as a handicap. Both our participants and those of and express disruptions in sexuality. While discussion may help manage these difficulties, there is a significant impact. Our results from the first theme enhance our understanding of adaptation to the colostomy over time. A vicious circle may ensue: a leaking pouch leads survivors to stop the activity; they retreat to their homes, where they feel safe; when survivors attempt the activity again, they focus on their pouch and, at the slightest doubt, they stop the activity; the conclusion would then be that the activity is not suitable because of their disability. Some authors have employed the concept of liminality to describe the sense of disorientation that cancer survivors experience, as they do not feel that they belong to either the healthy population or the group of cancer patients. This concept could be related to the experience of survivors caught in a vicious circle. However, a virtuous circle may arise with survivors who try activities and do not experience leakage from the pouch. They are likely to explore alternative activities and gradually engage in them, leading to increased self-confidence and a greater sense of comfort. Although this hypothesis requires validation, it could explain the various levels of adaptation reported among survivors in studies. The second theme highlights the alteration of the self-image of colostomized CRC survivors. Regardless of the country, survivors feel disfigured and less confident than before the cancer. To mitigate their apprehension, our participants personified their colostomy. This coping method was not previously described in the scientific literature. Surprisingly, they felt confident about getting over the situation without formal psychological assistance.
The latter contradicts the results of , which indicated that most of their participants wished to receive psychological support. Many of them sought advice on their lifestyle. In our study, ostomy nurses provided this support, which may explain the difference. Nevertheless, it highlights that medical staff should give more information about the potential benefits of seeking assistance from mental health professionals. Such support would likely shorten the time required to change survivors’ perceptions of their condition and facilitate their adjustment to the colostomy by fostering psychosocial resources. At the same time, colostomized CRC survivors also expressed some significant life changes. They focus on the positive aspects of their life and live according to their values. Our results align with the studies of and . This illustrates that colostomized CRC survivors can experience comfort and maintain a good quality of life. The third theme, “links with the medical world,” illustrates the dependence of CRC patients on their doctors to manage their health problems. Colostomized CRC survivors encounter many challenges when they return home after surgery. Our results show that they feel isolated and need more support. found similar results in Singapore. Studies on other cancers acknowledge that survivors feel emptiness after treatment. In addition, the fear of recurrence remains widespread, as demonstrated. Our results suggest that the way the diagnosis is announced might intensify this fear. The fourth and last theme highlights the difficulties encountered by colostomized CRC survivors in the social domain. The desire for more support conflicts with the need to conceal the colostomy. This dilemma extends to their professional lives because they fear being fired. These findings are consistent with the studies conducted by and . We observed that colostomized CRC survivors who openly discuss their colostomy often receive positive support from others.
The adjustment process depends on verbal and non-verbal communication skills. Conversely, concealing the colostomy leads to feeling exposed, different, and sometimes isolated. Colostomized CRC survivors suffer from societal misconceptions regarding their needs and difficulties. This may explain why many participants felt closer to others in remission than to those who had never had a health problem, as highlighted by . This phenomenon may exacerbate the isolation experienced by colostomized CRC survivors, potentially leading to distress or suffering. Many turn to support groups for information and social integration, as also reported by . Peers in these groups provide empathetic support and understanding for adjustment difficulties. We anticipate that most of our findings should be transferable to countries characterized by analogous healthcare systems. Novel insights, such as the emergence of rituals that could lead to obsessive-compulsive tendencies or the use of food choices to experience a sense of freedom, may even remain invariant across healthcare systems. However, specific outcomes, notably those associated with the utilization of public transportation networks, should be interpreted with caution. Congruent results may only be expected in countries featuring comparably extensive transport infrastructures. Other findings, such as the influence of the way the diagnosis is conveyed on the fear of cancer recurrence, may depend on the healthcare system. This study presents some limitations. Firstly, it is essential to note that six of the nine participants in our sample have been living with a colostomy for over a decade. This prolonged experience may have induced a habituation effect, potentially diminishing the perceived problematic effects associated with the colostomy. Secondly, we recruited our participants from support groups.
The population represented by these support groups may be one that actively seeks support, which could introduce a bias in our results. Nevertheless, this effect would offset the first limitation. Finally, we recruited our sample by convenience sampling. Therefore, isolated colostomized CRC survivors may not be represented in the study. Several individuals declined to participate in the study, explaining that they did not feel ready to discuss their experience. It is possible that our results underrepresent the proportion of colostomized CRC survivors who experience profound unease or isolation.
This study aimed to explore the experience of Swiss colostomized CRC survivors through their transition from illness to survivorship. Colostomized CRC survivors experience profound psychological and social challenges. Anxiety and fears of recurrence, often intensified by the announcement of the diagnosis, can foster obsessive-compulsive tendencies. The sense of vulnerability further complicates adaptation to the stoma, which is perceived as a definitive handicap with implications for daily activities and intimate aspects of life, as previously acknowledged in other studies. Positive factors, including a constructive attitude toward the stoma and cancer, as well as a degree of resignation, offer some resilience. However, the overall impact on self-perception is substantial, influencing activities, self-image, and relationships. Difficulties with digestion contribute to isolation, particularly through fear of leakage, with eating serving as a symbolic bastion of freedom for some. Despite the evident psychological difficulties, a noteworthy finding is the reluctance of colostomized CRC survivors in Switzerland to seek professional help. This underscores the importance of targeted prevention and normalization efforts in addressing the unique challenges faced by this population. As highlighted in other studies, a shift in behavior is observed as they strive to align with their values and life goals. The relationship with healthcare providers is nuanced: stoma therapists receive accolades, whereas some doctors and nurses are perceived as showing minimal interest in survivors’ difficulties, leading to a potential erosion of confidence in medical care. Socially, survivors express a desire for more support while concealing their colostomies due to a pervasive fear of dismissal or rejection. The societal misunderstanding of their needs further exacerbates this challenge.
Living with a colostomy is a very challenging experience, and people must have the opportunity to return to a comfortable life after treatment. The difficulties faced by colostomized CRC survivors can result in isolation and dissatisfaction with their lives. Therefore, it is imperative to prevent such negative post-treatment trajectories. One solution involves providing psychological support from the end of treatment until colostomized CRC survivors regain their sense of well-being. Mental health professionals should emphasize that psychological support may not yield immediate results but is beneficial over time, as suggested. Early and long-term support from health psychologists can enhance the likelihood of successful individual adjustment for colostomized CRC survivors.
The Promise of Artificial Intelligence in Digestive Healthcare and the Bioethics Challenges It Presents

Medicine is advancing swiftly into the era of Big Data, particularly through the more widespread use of Electronic Health Records (EHRs) and the digitalization of clinical data, intensifying the demands on informatics solutions in healthcare settings. Like all major advances throughout history, the benefits on offer are associated with new rules of engagement. Some 50 years have passed since what is considered to have been the birth of Artificial Intelligence (AI) at the Dartmouth Summer Research Project. This was an intensive 2-month project that set out to obtain solutions to the problems that are faced when attempting to make a machine that can simulate human intelligence. However, it was not until some years later that the first efforts to design biomedical computing solutions based on AI were seen. These efforts are beginning to bear fruit, and since the turn of the century, we have witnessed truly significant advances in this field, particularly in terms of medical image analysis. Indeed, a search for publications in the PubMed database using the terms “Artificial Intelligence” and “Gastrointestinal Endoscopy” returned 3 articles in 2017, as opposed to 64 in 2021 and 42 in 2022. While the true impact of these practices is yet to be seen in the clinic, their goals are clear: (i) to offer patients more personalized healthcare; (ii) to achieve greater diagnostic/prognostic accuracy; (iii) to reduce human error in clinical practice; and (iv) to reduce the time demands on clinicians as well as enhancing the efficiency of healthcare services. However, the introduction of these tools raises important bioethical issues.
Consequently, and before attempting to reap the benefits that they have to offer, it is important to assess how these advances affect patient–clinician relationships, what impact they will have on medical decision making, and how these potential improvements in diagnostic accuracy and efficiency will affect the different healthcare systems around the world.

1.1. The State-of-the-Art in Gastroenterology

A number of medical specialties such as Gastroenterology rely heavily on medical images to establish disease diagnosis and patient prognosis, as well as to monitor disease progression. Moreover, in more recent times, some such imaging techniques have been adapted so that they can potentially deliver therapeutic interventions. The digitalization of medical imaging has paved the way for important advances in this field, including the design of AI solutions to aid image acquisition and analysis. Different endoscopy modalities can be used to visualize and monitor the Gastrointestinal (GI) tract, making this an area in which AI models and applications could play an important future role. Indeed, this is reflected in the attempts to design AI-based tools addressing distinct aspects of these examinations and adapting to the different endoscopy techniques employed in the clinic. Accordingly, the development of such AI tools has been the focus of considerable effort of late, mainly with a view to improving the diagnostic accuracy of GI imaging and streamlining these procedures. The term AI is overarching, yet in the context of medical imaging, it can perhaps be more precisely defined by the machine learning (ML) class of AI applications, algorithms that are specifically used to recognize patterns in complex datasets.
“Supervised” or “unsupervised” ML models exist, although the former are perhaps of more interest in this context as they are better suited to attempts at predicting known outputs (e.g., a specific change in a tissue or organ, the presence of a lesion in the mucosa or debris in the tract, etc.). Multi-layered Convolutional Neural Networks (CNNs) are a specific type of deep learning (DL) model, a modality of ML. Significantly, CNNs excel in the analysis, differentiation and classification of medical images and videos, essentially due to their artificial resemblance to neurobiological processes. As might be expected, there have been significant technical advances in endoscopy over the years. Indeed, two decades have now passed since Capsule Endoscopy (CE: also known as Wireless or Video CE) was shown to be a valid minimally invasive diagnostic tool to visualise the intestine in its entirety, including the small bowel (SB) and colon. CE systems involve the use of three main elements. Firstly, there is the capsule that houses the camera, and now perhaps multiple cameras, as well as a light source, a transmitter and a battery. The second element is a sensor system that is necessary to receive the information transmitted by the capsule and that is connected to a recording system. Finally, there is the software required to display the endoscopy images so they can be examined. All these CE elements have undergone significant improvements since they were initially developed. For example, there have been numerous improvements to the capsules (e.g., in their frame acquisition rates, their angle of vision, the number of cameras, and manoeuvrability), as well as to the software used to visualise and examine the images obtained. One of the benefits of CE is that it offers the possibility of examining less accessible regions of the intestine, such as the SB, structures that are difficult to access using standard endoscopy protocols.
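The supervised-learning idea described above can be illustrated with a deliberately minimal sketch: a model is fitted to feature vectors with known labels (e.g., "lesion" vs. "normal" frames) and then predicts the label of unseen examples. The features, labels and the nearest-centroid rule here are illustrative assumptions, not a production CE classifier; real systems use CNNs operating on raw pixels.

```python
# Toy supervised classifier: fit per-label centroids, predict by nearest centroid.
# All feature values and labels below are invented for illustration only.

def fit_centroids(samples):
    """samples: list of (feature_vector, label); returns label -> centroid."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [s / counts[lab] for s in acc] for lab, acc in sums.items()}

def predict(centroids, vec):
    """Assign the label of the closest centroid (squared Euclidean distance)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: sqdist(vec, centroids[lab]))

# Toy 2-D features, e.g. (redness, texture), for labelled training frames.
train = [([0.9, 0.8], "lesion"), ([0.8, 0.9], "lesion"),
         ([0.1, 0.2], "normal"), ([0.2, 0.1], "normal")]
model = fit_centroids(train)
print(predict(model, [0.85, 0.7]))   # falls closest to the "lesion" centroid
```

The same fit-then-predict structure carries over to the CNN models discussed in the text; only the feature extraction and the decision rule become far richer.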
Consequently, CE can be used to evaluate conditions that are complicated to diagnose clearly, such as chronic GI bleeding, tumours and especially SB tumours; mucosal damage; Crohn’s disease (CD); chronic iron-deficiency anaemia; GI polyposis; or celiac disease. There are also fewer contraindications associated with the use of CE, although these may include disorders of GI motility, GI tract narrowing/obstruction, dysphagia, large GI diverticula or intestinal fistula. Despite the evolution of these systems over the past two decades, they still face a number of challenges, and these will be the target of future improvements. As indicated, software used to aid in the reading and evaluation of the images acquired by CE has also been developed, on the whole through efforts to decrease the reading times associated with these tests and to improve the accuracy of the results obtained. The time that trained gastroenterologists must dedicate to the analysis of CE examinations is a particularly critical issue, given the number of images generated (ca. 50,000). As such, considerable effort is required to ensure adequate diagnostic yields, with the high associated costs. Accordingly, the main limitation for CE, and particularly Colon Capsule Endoscopy (CCE), as a first-line procedure for the panendoscopic analysis of the entire GI mucosa, is that it is a relatively time-consuming and laborious diagnostic test that requires some expertise in image analysis. In fact, the diagnostic yield for CE is in part hampered by the monotonous and laborious human CE video analysis, which translates into suboptimal diagnostic accuracy, particularly in terms of sensitivity and negative predictive value (NPV). It must also be considered that alterations may only be evident in a few of the frames extracted from CE examinations, which means there is a significant chance that important lesions might be overlooked.
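To make the reading-burden argument concrete, the following hedged sketch shows why automated frame triage matters: with roughly 50,000 frames per examination, even a modest per-frame filter greatly shrinks the set a gastroenterologist must review. The scores, threshold and seconds-per-frame figure are illustrative assumptions, not measured values.

```python
# Toy triage of per-frame abnormality scores produced by some upstream model.

def triage(frame_scores, threshold=0.5):
    """Keep indices of frames whose model score suggests an abnormality."""
    return [i for i, s in enumerate(frame_scores) if s >= threshold]

def review_time_saved(n_frames, n_flagged, secs_per_frame=0.3):
    """Rough reading-time reduction if only flagged frames are reviewed."""
    return (n_frames - n_flagged) * secs_per_frame

scores = [0.02, 0.91, 0.10, 0.77, 0.03]   # hypothetical per-frame scores
flagged = triage(scores)
print(flagged)                            # indices of frames to review
```

Note the trade-off the text goes on to discuss: lowering the threshold protects sensitivity (fewer missed lesions) at the cost of a smaller reduction in reading time.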
Indeed, the inter- and intra-operator error associated with the reading process is one of the main sources of error in these examinations. As a result, there has been much interest from an early stage in the development of these systems to design software that can be used to automatically detect certain features in the images obtained. For example, there have been attempts to include support vector machines (SVMs) within CE systems, in particular for the detection of blood/hematic traces. In this sense, one of the most interesting recent and future developments in CE is the possible incorporation of AI algorithms to automate the detection, differentiation and stratification of specific features of the GI images obtained.

1.2. Automated Analysis and AI Tools to Examine the GI Tract

Several studies have showcased the potential of using CNNs in different areas of digestive endoscopy. For example, when performing such examinations, the preparation and cleanliness of the GI tract are fundamental to ensure the validity of the results obtained. Nevertheless, clearly validated scales to assess this feature of endoscopy examinations are still lacking, which has inspired efforts to design AI tools based on CNN models that can automatically evaluate GI tract cleanliness in these tests. Obviously, and in line with the advances in other areas of medicine, many studies have centred on the design of AI tools capable of detecting lesions on or alterations to the GI mucosa likely to be associated with disease, as well as specific characteristics of these changes. Indeed, the potential to apply these systems in real time could offer important benefits to the clinician, particularly when contemplating conditions that require prompt diagnosis and treatment.
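As a simple intuition for the blood-detection task mentioned above, the following sketch flags possible hematic traces from a crude colour feature: the red-channel dominance of a frame compared against a threshold. This is not the SVM approach cited in the text; the pixel values and threshold are invented assumptions, and real systems learn such decision boundaries from labelled data.

```python
# Toy colour-feature detector for red-dominated (possibly bloody) frames.

def redness(pixels):
    """Mean fraction of each pixel's intensity carried by the red channel."""
    ratios = [r / (r + g + b) for r, g, b in pixels if (r + g + b) > 0]
    return sum(ratios) / len(ratios) if ratios else 0.0

def flag_blood(pixels, threshold=0.5):
    """Flag a frame when its average red dominance exceeds the threshold."""
    return redness(pixels) >= threshold

bloody = [(200, 30, 30), (180, 40, 35)]      # strongly red-dominated pixels
mucosa = [(140, 110, 90), (150, 120, 100)]   # ordinary mucosa-like tones
print(flag_blood(bloody), flag_blood(mucosa))
```

A learned classifier replaces the fixed threshold with a boundary estimated from examples, which is precisely where the training-data issues discussed later in the paper enter.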
Moreover, these systems could potentially be used in combination or in conjunction with other AI tools, such as those designed to assess the quality of preparation, or in attempts to not only identify lesions but to also establish their malignant potential. We must also consider that the implementation of AI tools for healthcare administration is likely to have a direct effect on gastroenterology, as it will on other clinical areas. Thus, in light of the increase in the number of AI applications being generated that may potentially be integrated into standard healthcare, it becomes more urgent to address the bioethical issues that surround their use before they are implemented in clinical practice. In this sense, it is important to note that while existing frameworks could be adjusted to regulate the use of clinical AI applications, their disruptive nature makes it more likely that new ‘purpose-built’ regulatory frameworks and guidelines should be drawn up from which regulations can be defined. Moreover, in this process, it will be important to ensure that the AI innovations they are designed to control are enhanced and not limited by the regulations drawn up.
The potential benefits that are provided by any new technology must be weighed up against any risks associated with its introduction. Accordingly, if the AI tools that are developed to be used with CE are to fulfil their potential, they must offer guarantees against significant risks, perhaps the most important of which are related to issues of privacy and data protection, unintentional bias in the data and design of the tools, transferability, explainability and responsibility. In addition, it is clear that this is a disruptive technology that will require regulatory guidelines to be put in place to legislate the appropriate use of these tools, guidelines that are on the whole yet to be established. However, it is clear that the need for such regulation has not escaped the healthcare regulators, and, as in other fields, initiatives have been launched to explore the legal aspects surrounding the use of AI tools in healthcare that will clearly be relevant to digestive medicine as well.

2.1. Privacy and Data Management for AI-Based Tools

Ensuring the privacy of medical information is increasingly challenging in the digital age. Not only are electronic data easily reproduced, but they are also vulnerable to remote access and manipulation, with economic incentives intensifying cyberattacks on health-related organisations. Breaches of medical confidentiality can have important consequences for patients. Indeed, they may not only be responsible for the shaming or alienation of patients with certain illnesses, but they could even perhaps limit their employment opportunities or affect their health insurance costs. As medical AI applications become more common, and as more data are collected and used/shared more widely, the threat to privacy increases. The hope is that measures such as de-identification will help maintain privacy, although this would require the process to be adopted more generally in many areas of life.
However, the inconvenience associated with these approaches makes this unlikely to occur. Moreover, re-identification of de-identified data is surprisingly easy, and thus, we must perhaps accept that introducing clinical AI applications will compromise our privacy a little. This would be more acceptable if all individuals had the same chance of benefitting from these tools, in the absence of any bias, but at present, this does not appear to be the case (see below). While some progress in personal data protection has been made (e.g., General Data Protection Regulation 2016/679 in the E.U. or the Health Insurance Portability and Accountability Act in the USA), further advances with stakeholders are required to specifically address the data privacy issues associated with the deployment of AI applications. The main aim of novel healthcare interventions and technologies is to reduce morbidity and mortality, or to achieve similar health outcomes more efficiently or economically. The evidence favouring the implementation of AI systems in healthcare generally focuses on their relative accuracy compared to gold standards, and as such, there have been fewer clinical trials carried out that measure their effects on outcomes. This emphasis on accuracy may potentially lead to overdiagnosis, although this is a phenomenon that may be compensated for by considering other pathological, genomic and clinical data. Hence, it may be necessary to use more extended personal data from EHRs in AI applications to ensure the benefits of the tools are fully reaped and that they do not mislead physicians. One of the advantages of using such algorithms is that they might identify patterns and characteristics that are difficult for the human observer to perceive, and even those that may not currently be included in epidemiological studies, further enhancing diagnostic precision.
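A minimal sketch of the de-identification measure discussed above: direct identifiers are replaced with salted hashes (pseudonymisation) before records are shared. The field names and salt are assumptions for illustration; note that quasi-identifiers such as age, postcode or dates can still enable re-identification, which is exactly why the text warns that de-identification alone is not a complete privacy guarantee.

```python
import hashlib

def pseudonymise(record, salt, direct_identifiers=("name", "patient_id")):
    """Return a copy of the record with direct identifiers replaced by
    salted SHA-256 pseudonyms; other fields (quasi-identifiers) are kept."""
    out = dict(record)
    for field in direct_identifiers:
        if field in out:
            digest = hashlib.sha256((salt + str(out[field])).encode()).hexdigest()
            out[field] = digest[:12]   # short, stable pseudonym
    return out

# Hypothetical record; "age" survives as a quasi-identifier.
record = {"name": "A. Example", "patient_id": "CH-00123", "age": 64}
safe = pseudonymise(record, salt="unit-secret")
print(safe["age"], safe["name"] != record["name"])
```

Because the same salt yields the same pseudonym, records for one patient can still be linked across datasets for research, which is both the utility and the residual risk of this approach.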
However, this situation will create important demands on data management, on the safe and secure use of personal information and regarding consent for its use, accentuated by the large amount of quality data required to train and validate DL tools. Traditional opt-in/opt-out models of consent will be difficult to implement on the scale of these data and in such a dynamic environment. Thus, addressing data-related issues will be fundamental to ensure a problem-free incorporation of AI tools into healthcare, perhaps requiring novel approaches to data protection. One possible solution to the question of privacy and data management may come through the emergence of blockchain technologies in healthcare environments. In this sense, recent initiatives into the use of blockchain technology in healthcare may offer possible solutions to some of the problems regarding data handling and management, not least as this technology will facilitate the safer, traceable and efficient handling of an individual’s clinical information. The uniqueness of blockchain technology resides in the fact that it permits a massive, secure and decentralized public store of ordered records or events to be established. Indeed, the local storage of medical information is a barrier to sharing this information, as well as potentially compromising its security. Blockchain technology enables data to be carefully protected and safely stored, assuring their immutability. Thus, blockchain technology could help overcome the current fragmentation of a patient’s medical records, potentially benefitting the patient and healthcare professionals alike. It could also promote communication between healthcare professionals both at the same and perhaps at a different centre, radically reducing the costs associated with sharing medical data. AI applications can benefit from different features of the use of a blockchain, offering trustworthiness, enhanced privacy and traceability.
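The immutability property described above rests on a simple mechanism that can be sketched in a few lines: each entry commits to the hash of the previous one, so any retrospective edit breaks the chain. This is a toy hash chain illustrating tamper evidence, not a full distributed ledger, and the record fields are invented for the example.

```python
import hashlib, json

def add_block(chain, payload):
    """Append a record that commits to the hash of the previous record."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return chain

def verify(chain):
    """Recompute every link; returns False if any record was altered."""
    prev = "0" * 64
    for block in chain:
        body = {"payload": block["payload"], "prev": block["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != digest:
            return False
        prev = block["hash"]
    return True

chain = []
add_block(chain, {"exam": "CCE", "finding": "polyp"})
add_block(chain, {"exam": "CCE", "finding": "none"})
print(verify(chain))                       # True: chain intact
chain[0]["payload"]["finding"] = "edited"  # retrospective tampering...
print(verify(chain))                       # False: tampering detected
```

Real healthcare blockchains add distribution, consensus and access control on top of this primitive, but the tamper-evidence argument made in the text reduces to exactly this hash-linking.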
Indeed, when the data used in AI applications (both for training and in general) are acquired from a reliable, secure and trusted platform, AI algorithms will perform better.

2.2. The Issue of Bias in AI Applications

Among the most important issues faced by AI applications are those of bias and transferability. Bias may be introduced through the training data employed or by decisions that are made during the design process. In essence, ML systems are shaped by the data on which they are trained and validated, identifying patterns in large datasets that reproduce desired outcomes. Indeed, AI systems are tailor-made, and as such, they are only as good as the data with which they are trained. Consequently, when these data are incomplete, unrepresentative or poorly interpreted, the end result can be catastrophic. One specific type of bias, spectrum bias, occurs when a diagnostic test is studied in individuals who differ from the population for which the test was intended. Indeed, spectrum bias has been recognized as a potential pitfall for AI applications in capsule endoscopy, as well as in the field of cardiovascular medicine. Hence, AI learning models might not always be fully valid and applicable to new datasets. In this context, the integration of blockchain-enabled data from other healthcare platforms could serve to augment the number of what would otherwise be underrepresented cases in a dataset, thereby improving the training of the AI application and, ultimately, its successful implementation. In real life, any inherent bias in clinical tools cannot be ignored and must be considered before validating AI applications. As a result, overfitting of these models should not be ignored, a phenomenon that occurs when the model is too tightly tuned to the training data such that it does not function correctly when fed with other data.
This can be avoided by using larger datasets for training and by not training the applications excessively, and possibly also by simplifying the models themselves. The way outcomes are identified is also entirely dependent on the data the models are fed. Indeed, there are examples of different pathologies where certain physical characteristics achieve better diagnostic performance, such as lighter rather than darker skin, yet perhaps this is because that population is overrepresented in the training data. Consequently, it is possible that only those with fair skin will fully benefit from such tools. Human decisions may also skew AI tools, such that they may act in discriminatory ways. Disadvantaged groups may not be well-represented in the formative stages of evidence-based medicine, and unless this is rectified through human interventions that combat this bias, it will almost certainly be carried over into AI tools. Hence, programmes will need to be established to ensure ethical AI development, such as those contemplated to detect and eliminate bias in data and algorithms. While bias may emerge from poor data collection and evaluation, it can also emerge in systems trained on high-quality datasets. Aggregation bias can emerge from using a single population to design a model that is not optimal for another group. Thus, the potential that bias exists must be faced and not ignored, searching for solutions to overcome this problem rather than rejecting the implementation of AI tools on this basis. In association with bias, transferability to other settings is a related and significant issue for AI tools. An algorithm trained and tested in one environment will not necessarily perform as well in another environment, and it may need to be retrained on data from the new environment. Even so, transferability is not ensured, and hence, AI tools must be carefully designed, tested and evaluated in each new context prior to their use with patients.
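The overfitting and transferability failures described above share one diagnostic: a model must be evaluated on data held out from training, or drawn from the new environment, rather than on the data it was tuned to. The tiny threshold "model" and the two datasets below are illustrative assumptions, chosen only to show how in-sample performance can mask a drop in a shifted population.

```python
# Toy evaluation: a decision threshold tuned in one environment is tested
# both in-sample and on data from a second environment with shifted scores.

def accuracy(model_threshold, data):
    """data: list of (score, true_label); predict positive when score >= threshold."""
    correct = sum((score >= model_threshold) == bool(label) for score, label in data)
    return correct / len(data)

train = [(0.9, 1), (0.8, 1), (0.2, 0), (0.1, 0)]
new_env = [(0.6, 1), (0.55, 1), (0.45, 0), (0.3, 0)]  # shifted score distribution

threshold = 0.7   # tuned on the training environment
print(accuracy(threshold, train))     # looks perfect in-sample
print(accuracy(threshold, new_env))   # degrades in the new environment
```

This is the quantitative form of the paper's recommendation: retrain or recalibrate on data from each new context, and never report only training-set performance.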
This issue also implies there must be significant transparency about the data sources used in the design and development of these systems, with the ensuing demands on data protection and safety.

2.3. The Explainability, Responsibility and the Role of the Clinician in the Era of AI-Based Medicine

Another critical issue with regards to the application of DL algorithms is that of explainability and interpretability. When explainable, what an algorithm does and the values it encodes can be readily understood. However, it appears that less explainable algorithms may be more accurate, and thus, it remains unclear if it is possible to achieve both these features at the same time. How algorithms achieve a particular classification or recommendation may even be unclear to some extent to designers and users alike, not least due to the influence of training on the output of the algorithms and that of user interactions. Indeed, in situations where algorithms are being used to address relatively complex medical situations and relationships, this can lead to what is referred to as “black-box medicine”: circumstances in which the basis for clinical decision making becomes less clear. While the explanations a clinician may give for their decisions may not be perfect, they are responsible for these decisions and can usually offer a coherent explanation if necessary. Thus, should AI tools be allowed to make diagnostic, prognostic and management decisions that cannot be explained by a physician? Some lack of explainability has been widely accepted in modern medicine, with clinicians having prescribed aspirin as an analgesic without understanding its mechanism of action for nearly a century. Moreover, it still remains unclear why Lithium acts as a mood stabilizer. If drugs can be prescribed without understanding how they work, then can we not use AI without fully understanding how it reaches a decision?
Yet as we move towards greater patient inclusion in their healthcare decisions, the inability of a clinician to fully explain decisions based on AI may become more problematic. Hence, perhaps we are right to seek systems that allow us to trace how conclusions are reached. Moreover, only through some degree of knowledge of AI can physicians be aware of what these tools can actually achieve and when they may be performing irregularly. AI is commonly considered to be of neutral value, neither intrinsically good nor bad, yet it is capable of producing good and bad outcomes. AI algorithms explicitly or implicitly encode values as part of their design, and these values inevitably influence patient outcomes. For example, algorithms will often be designed to prioritise avoiding false-negative rather than false-positive identifications, or to perform differently depending on the quality of the preparation. While the performance of AI systems would represent a limiting factor for diagnostic success, additional factors will also influence their accuracy and sensitivity, such as the data on which they are trained, how the data are used by the algorithm, and any conscious or unconscious biases that may be introduced. Indeed, the digitalisation of medicine has been said to have shifted the physician’s attention away from the body towards the patient’s data, and the introduction of AI tools runs the risk of further exacerbating this movement. Introducing AI tools into medicine also has implications for the allocation of responsibility regarding treatment decisions and any adverse outcomes based on the use of such tools, as discussed in greater depth elsewhere. At present, there appears to be a void regarding legal responsibility if the use of AI applications produces harm, and there are difficulties in clearly establishing the autonomy and agency of AI.
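The value choice the text mentions, prioritising the avoidance of false negatives over false positives, is concretely encoded when a decision threshold is selected. The hedged sketch below picks the highest threshold (fewest false alarms) that still meets a sensitivity floor; the scores, labels and the 0.95 floor are illustrative assumptions.

```python
# Toy threshold selection that encodes a sensitivity-first design value.

def sensitivity(threshold, data):
    """Fraction of true positives (label == 1) scoring at or above the threshold."""
    positives = [(s, l) for s, l in data if l == 1]
    return sum(s >= threshold for s, _ in positives) / len(positives)

def pick_threshold(data, min_sensitivity=0.95):
    """Highest threshold (fewest false alarms) still meeting the sensitivity floor."""
    candidates = sorted({s for s, _ in data}, reverse=True)
    for t in candidates:
        if sensitivity(t, data) >= min_sensitivity:
            return t
    return min(candidates)

# Hypothetical (score, disease-label) pairs from a validation set.
data = [(0.95, 1), (0.80, 1), (0.60, 1), (0.55, 0), (0.40, 0), (0.10, 0)]
print(pick_threshold(data))   # low enough to capture all diseased cases
```

The choice of `min_sensitivity` is not a technical constant but exactly the kind of encoded value judgement, accepting more false alarms to miss fewer lesions, that the paper argues should be made transparently.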
Should any adverse event occur, it is necessary to establish if any party failed in their duty or if errors occurred, attributing responsibility accordingly. Responsibility for the use of the AI will usually be shared between the physician and the institution where the treatment was provided, but what of the designers? Responsibility for acting on the basis of the output of the AI will rest with the physician, yet perhaps no party has acted improperly or the AI tool behaved in an unanticipated manner. Indeed, if the machine performs its tasks reliably, there may be no wrongdoing even when it fails. The points in an algorithm at which decisions are made may be complicated to define, and thus, clinicians may be asked to take responsibility for decisions they have not made when using a system that incorporates AI. Importantly, this uncertainty regarding responsibility may influence the trust of a patient in their clinician. Accordingly, the more that clinicians and patients rely upon clinical AI systems, the more that trust may shift away from clinicians toward the AI tools themselves. In relation to the above, the implementation of AI tools may also raise concerns about the role of clinicians. While there are fears that they will be ‘replaced’ by AI tools, the ideal situation would be to take advantage of the strengths of both humans and machines. AI applications could help to compensate for shortages in personnel, they could free up more of a clinician’s time, enabling them to dedicate this time to their patients or other tasks, or they might enhance the clinician’s capacity in terms of the number of patients they could treat. While decision making in conjunction with AI should involve clinicians, the issue of machine–human disagreement must be addressed. Alternatively, should we be looking for opportunities to introduce fully automated clinical AI solutions?
For example, could negative results following AI-based assessment of GI examinations be communicated directly to the patient? While this might be more efficient, it brings into question the individual’s relationship with the clinician. Indeed, the dehumanisation of healthcare may have a detrimental rather than a beneficial effect given the therapeutic value of human contact, attention and empathy . While clinicians may have more time to dedicate to their patients as more automated systems are incorporated into their workflow, they may be less capable to explain AI-based healthcare decision making . Moreover, continued use of AI tools could deteriorate a clinician’s skills, a phenomenon referred to as “de-skilling” , such as their capacity to interpret endoscopy images or to identify less obvious alterations. Conversely, automating workflows may expose clinicians to more images, honing their skills by greater exposure to clinically relevant images, yet maybe at the cost of seeing fewer normal images. In addition, more extended use of automated algorithms may lead to a propensity to accept automated decisions even when they are wrong , with a negative effect on the clinician’s diagnostic precision. Thus, efforts must be made to ensure that the clinician’s professional capacity remains fine-tuned to avoid generating a dependence on automated systems and to avoid any potential loss of skills (e.g., in performing and interpreting endoscopies) when physicians are no longer required to use (the phenomenon of de-skilling has also been dealt with in more detail elsewhere ). Other issues have been raised in association with the clinical introduction of AI applications, such as whether they will lead to greater surveillance of populations and how this should be controlled. 
Surveillance might compromise privacy but it could also be beneficial, enhancing the data with which the DL applications are trained, so perhaps this is an issue that will be necessary to contemplate in regulatory guidelines. Another issue that also needs to be addressed is the extent to which non-medical specialists such as computer scientists and IT specialists will gain power in clinical settings. Finally, the fragility associated with reliance on AI systems and the potential that monopolies will be established in specific areas of healthcare will also have to be considered . In summary, it will be important to respect a series of criteria when designing and implementing AI-based clinical solutions to ensure that they are trustworthy .
Ensuring the privacy of medical information is increasingly challenging in the digital age. Not only are electronic data easily reproduced, but they are also vulnerable to remote access and manipulation, with economic incentives intensifying cyberattacks on health-related organisations . Breaches of medical confidentiality can have important consequences for patients. Indeed, they may not only be responsible for the shaming or alienation of patients with certain illnesses, but they could even limit their employment opportunities or affect their health insurance costs. As medical AI applications become more common, and as more data are collected, used and shared more widely, the threat to privacy increases. The hope is that measures such as de-identification will help maintain privacy, although this would require the process to be adopted much more generally in many areas of life, and the inconvenience associated with such approaches makes this unlikely to occur. Moreover, re-identification of de-identified data is surprisingly easy , and thus, we must perhaps accept that introducing clinical AI applications will compromise our privacy a little. This would be more acceptable if all individuals had the same chance of benefitting from these tools, in the absence of any bias, but at present, this does not appear to be the case (see below). While some progress in personal data protection has been made (e.g., General Data Protection Regulation 2016/679 in the E.U. or the Health Insurance Portability and Accountability Act in the USA: ), further advances with stakeholders are required to specifically address the data privacy issues associated with the deployment of AI applications . The main aim of novel healthcare interventions and technologies is to reduce morbidity and mortality, or to achieve similar health outcomes more efficiently or economically.
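The claim that re-identification of de-identified data is surprisingly easy can be illustrated with a minimal sketch: records stripped of names are re-linked to a public registry through "quasi-identifiers" such as postcode, birth year and sex. All names, codes and values below are invented for illustration.

```python
# Sketch: why de-identification alone can fail. Records with names removed
# can often be re-linked through quasi-identifiers that also appear in
# public datasets. All data below are invented.

deidentified_records = [
    {"postcode": "28001", "birth_year": 1975, "sex": "F", "diagnosis": "IBD"},
    {"postcode": "28002", "birth_year": 1980, "sex": "M", "diagnosis": "GERD"},
]

# A hypothetical public registry (e.g., an electoral roll) with names.
public_registry = [
    {"name": "A. Smith", "postcode": "28001", "birth_year": 1975, "sex": "F"},
    {"name": "B. Jones", "postcode": "28003", "birth_year": 1990, "sex": "M"},
]

def reidentify(records, registry):
    """Link records to names when the quasi-identifier triple matches."""
    index = {(p["postcode"], p["birth_year"], p["sex"]): p["name"]
             for p in registry}
    matches = []
    for r in records:
        key = (r["postcode"], r["birth_year"], r["sex"])
        if key in index:
            matches.append((index[key], r["diagnosis"]))
    return matches

print(reidentify(deidentified_records, public_registry))
# A. Smith is re-identified together with her diagnosis.
```

In realistic datasets, a handful of such attributes is often enough to single out most individuals, which is why de-identification alone is a weak guarantee.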
The evidence favouring the implementation of AI systems in healthcare generally focuses on their relative accuracy compared to gold standards , and as such, fewer clinical trials have been carried out that measure their effects on outcomes . This emphasis on accuracy may potentially lead to overdiagnosis , although this is a phenomenon that may be compensated for by considering other pathological, genomic and clinical data. Hence, it may be necessary to use more extended personal data from electronic health records (EHRs) in AI applications to ensure the benefits of the tools are fully reaped and that they do not mislead physicians. One of the advantages of using such algorithms is that they might identify patterns and characteristics that are difficult for the human observer to perceive, and even those that may not currently be included in epidemiological studies, further enhancing diagnostic precision. However, this situation will create important demands on data management, on the safe and secure use of personal information and regarding consent for its use, accentuated by the large amount of quality data required to train and validate DL tools. Traditional opt-in/opt-out models of consent will be difficult to implement on the scale of these data and in such a dynamic environment . Thus, addressing data-related issues will be fundamental to ensure a problem-free incorporation of AI tools into healthcare , perhaps requiring novel approaches to data protection. One possible solution to the question of privacy and data management may come through the emergence of blockchain technologies in healthcare environments. Recent initiatives into the use of blockchain technology in healthcare may offer solutions to some of the problems regarding data handling and management, not least as this technology facilitates the safer, traceable and efficient handling of an individual’s clinical information .
Indeed, the uniqueness of blockchain technology resides in the fact that it permits a massive, secure and decentralized public store of ordered records or events to be established . At present, the local storage of medical information is a barrier to sharing this information, as well as potentially compromising its security. Blockchain technology enables data to be carefully protected and safely stored, assuring their immutability . Thus, blockchain technology could help overcome the current fragmentation of a patient’s medical records, potentially benefitting the patient and healthcare professionals alike. It could also promote communication between healthcare professionals, both within the same centre and between different centres, radically reducing the costs associated with sharing medical data . AI applications can benefit from several features of a blockchain, which offers trustworthiness, enhanced privacy and traceability. When the data used in AI applications (both for training and in general) are acquired from a reliable, secure and trusted platform, AI algorithms will perform better.
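The immutability property the text relies on can be sketched minimally: each block stores the hash of the previous one, so altering any stored record breaks every subsequent link and the tampering becomes detectable. This is illustrative only; production healthcare blockchains additionally involve consensus, access control and encryption.

```python
# Minimal hash-chain sketch of blockchain immutability: modifying any
# appended record invalidates the chain. Illustrative only.
import hashlib
import json

def block_hash(block):
    # Canonical serialisation so the same block always hashes the same way.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, record):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "record": record})
    return chain

def verify(chain):
    """Return True if no block has been altered since it was appended."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, {"patient": "anon-1", "event": "capsule endoscopy"})
append_block(chain, {"patient": "anon-1", "event": "lesion flagged"})
assert verify(chain)

chain[0]["record"]["event"] = "examination deleted"  # tampering attempt
assert not verify(chain)  # the broken link exposes it
```

Because every later block commits to the hash of the one before it, retroactive edits cannot go unnoticed, which is the property that makes a shared, traceable clinical record feasible.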
Among the most important issues faced by AI applications are those of bias and transferability . Bias may be introduced through the training data employed or by decisions that are made during the design process . In essence, ML systems are shaped by the data on which they are trained and validated, identifying patterns in large datasets that reproduce desired outcomes. AI systems are tailor-made, and as such, they are only as good as the data with which they are trained. When these data are incomplete, unrepresentative or poorly interpreted, the end result can be catastrophic . One specific type of bias, spectrum bias, occurs when a diagnostic test is studied in individuals who differ from the population for which the test was intended. Indeed, spectrum bias has been recognized as a potential pitfall for AI applications in capsule endoscopy (CE) , as well as in the field of cardiovascular medicine . Hence, AI learning models might not always be fully valid and applicable to new datasets. In this context, the integration of blockchain-enabled data from other healthcare platforms could serve to augment the number of what would otherwise be underrepresented cases in a dataset, thereby improving the training of the AI application and, ultimately, its successful implementation. In real life, any inherent bias in clinical tools cannot be ignored and must be considered before validating AI applications. Likewise, overfitting of these models should not be ignored, a phenomenon that occurs when the model is so tightly tuned to the training data that it does not function correctly when fed with other data . This can be avoided by using larger datasets for training, by not training the applications excessively, and possibly also by simplifying the models themselves. The way outcomes are identified is also entirely dependent on the data the models are fed.
Indeed, there are examples of different pathologies where certain physical characteristics achieve better diagnostic performance, such as lighter rather than darker skin, yet perhaps this is a population that is overrepresented in the training data. Consequently, it is possible that only those with fair skin will fully benefit from such tools . Human decisions may also skew AI tools, such that they may act in discriminatory ways . Disadvantaged groups may not be well represented in the formative stages of evidence-based medicine , and unless this is rectified and human interventions combat this bias, it will almost certainly be carried over into AI tools. Hence, programmes will need to be established to ensure ethical AI development, such as those contemplated to detect and eliminate bias in data and algorithms . While bias may emerge from poor data collection and evaluation, it can also emerge in systems trained on high-quality datasets. Aggregation bias can emerge from using a single population to design a model that is not optimal for another group . Thus, the potential that bias exists must be faced and not ignored, searching for solutions to overcome this problem rather than rejecting the implementation of AI tools on this basis ( and ). In association with bias, transferability to other settings is a related and significant issue for AI tools . An algorithm trained and tested in one environment will not necessarily perform as well in another environment, and it may need to be retrained on data from the new environment. Even so, transferability is not ensured, and hence, AI tools must be carefully designed, tested and evaluated in each new context prior to their use with patients . This issue also implies there must be significant transparency about the data sources used in the design and development of these systems, with the ensuing demands on data protection and safety.
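The effect of spectrum bias and poor transferability described above can be sketched numerically: a diagnostic threshold tuned on one synthetic population loses accuracy when transferred to a population whose biomarker distribution differs. All distributions and values below are invented.

```python
# Toy illustration of spectrum bias: a diagnostic threshold tuned on one
# population degrades when applied to a population with a shifted
# biomarker distribution. All numbers are synthetic.
import random

random.seed(0)

def sample_population(n, healthy_mean, diseased_mean, spread):
    """Return (biomarker value, has_disease) pairs; half healthy, half diseased."""
    data = []
    for _ in range(n // 2):
        data.append((random.gauss(healthy_mean, spread), 0))
        data.append((random.gauss(diseased_mean, spread), 1))
    return data

def accuracy(data, threshold):
    # Classify "diseased" when the biomarker exceeds the threshold.
    correct = sum((value > threshold) == bool(label) for value, label in data)
    return correct / len(data)

def best_threshold(data):
    # Tune the cut-off on this population only.
    candidates = [v for v, _ in data]
    return max(candidates, key=lambda t: accuracy(data, t))

train = sample_population(1000, healthy_mean=1.0, diseased_mean=3.0, spread=0.8)
shifted = sample_population(1000, healthy_mean=2.0, diseased_mean=3.5, spread=0.8)

t = best_threshold(train)
print(f"accuracy on training population: {accuracy(train, t):.2f}")
print(f"accuracy on shifted population:  {accuracy(shifted, t):.2f}")  # lower
```

The same model, unchanged, simply performs worse on the second population, which is why retraining and re-evaluation in each new context are stressed above.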
Another critical issue with regard to the application of DL algorithms is that of explainability and interpretability . When explainable, what an algorithm does and the value it encodes can be readily understood . However, it appears that less explainable algorithms may be more accurate , and thus, it remains unclear if it is possible to achieve both these features at the same time. How algorithms achieve a particular classification or recommendation may even be unclear to some extent to designers and users alike, not least due to the influence of training on the output of the algorithms and that of user interactions. Indeed, in situations where algorithms are being used to address relatively complex medical situations and relationships, this can lead to what is referred to as “black-box medicine”: circumstances in which the basis for clinical decision making becomes less clear . While the explanations a clinician may give for their decisions may not be perfect, they are responsible for these decisions and can usually offer a coherent explanation if necessary. Thus, should AI tools be allowed to make diagnostic, prognostic and management decisions that cannot be explained by a physician ? Some lack of explainability has been widely accepted in modern medicine, with clinicians prescribing aspirin as an analgesic without understanding its mechanism of action for nearly a century . Moreover, it remains unclear why lithium acts as a mood stabilizer . If drugs can be prescribed without understanding how they work, then can we not use AI without fully understanding how it reaches a decision? Yet as we move towards greater patient inclusion in their healthcare decisions, the inability of a clinician to fully explain decisions based on AI may become more problematic. Hence, perhaps we are right to seek systems that allow us to trace how conclusions are reached.
Moreover, only through some degree of knowledge of AI can physicians be aware of what these tools can actually achieve and when they may be performing irregularly. AI is commonly considered to be of neutral value, neither intrinsically good nor bad, yet it is capable of producing good and bad outcomes. AI algorithms explicitly or implicitly encode values as part of their design , and these values inevitably influence patient outcomes. For example, algorithms will often be designed to prioritise a false-negative rather than false-positive identification, or to perform distinctly depending on the quality of the preparation. While the performance of AI systems would represent a limiting factor for diagnostic success, additional factors will also influence their accuracy and sensitivity, such as the data on which they are trained, how the data are used by the algorithm, and any conscious or unconscious biases that may be introduced. Indeed, the digitalisation of medicine has been said to have shifted the physician’s attention away from the body towards the patient’s data , and the introduction of AI tools runs the risk of further exacerbating this movement. Introducing AI tools into medicine also has implications for the allocation of responsibility regarding treatment decisions and any adverse outcomes based on the use of such tools, as discussed in greater depth elsewhere . At present, there appears to be a void regarding legal responsibility if the use of AI applications produces harm , and there are difficulties in clearly establishing the autonomy and agency of AI . Should any adverse event occur, it is necessary to establish if any party failed in their duty or if errors occurred, attributing responsibility accordingly. Responsibility for the use of the AI will usually be shared between the physician and institution where the treatment was provided, but what of the designers? 
Responsibility for acting on the basis of the output of the AI will rest with the physician, yet perhaps no party will have acted improperly, or the AI tool may have behaved in an unanticipated manner. Indeed, if the machine performs its tasks reliably, there may be no wrongdoing even when it fails. The points in an algorithm at which decisions are made may be complicated to define, and thus, clinicians may be asked to take responsibility for decisions they have not made when using a system that incorporates AI. Importantly, this uncertainty regarding responsibility may influence the trust of a patient in their clinician . Accordingly, the more that clinicians and patients rely upon clinical AI systems, the more that trust may shift away from clinicians toward the AI tools themselves . In relation to the above, the implementation of AI tools may also raise concerns about the role of clinicians. While there are fears that they will be ‘replaced’ by AI tools , the ideal situation would be to take advantage of the strengths of both humans and machines. AI applications could help to compensate for shortages in personnel , they could free up more of a clinician’s time, enabling them to dedicate this time to their patients or other tasks , or they might enhance the clinician’s capacity in terms of the number of patients they could treat. While decision making in conjunction with AI should involve clinicians, the issue of machine–human disagreement must be addressed . Alternatively, should we be looking for opportunities to introduce fully automated clinical AI solutions? For example, could negative results following AI-based assessment of GI examinations be communicated directly to the patient? While this might be more efficient, it brings into question the individual’s relationship with the clinician. Indeed, the dehumanisation of healthcare may have a detrimental rather than a beneficial effect given the therapeutic value of human contact, attention and empathy .
While clinicians may have more time to dedicate to their patients as more automated systems are incorporated into their workflow, they may be less able to explain AI-based healthcare decision making . Moreover, continued use of AI tools could erode a clinician’s skills, a phenomenon referred to as “de-skilling” , such as their capacity to interpret endoscopy images or to identify less obvious alterations. Conversely, automating workflows may expose clinicians to more images, honing their skills by greater exposure to clinically relevant images, though perhaps at the cost of seeing fewer normal images. In addition, more extended use of automated algorithms may lead to a propensity to accept automated decisions even when they are wrong , with a negative effect on the clinician’s diagnostic precision. Thus, efforts must be made to ensure that the clinician’s professional capacity remains fine-tuned, avoiding dependence on automated systems and any potential loss of skills (e.g., in performing and interpreting endoscopies) when physicians are no longer required to exercise them routinely (the phenomenon of de-skilling has also been dealt with in more detail elsewhere ). Other issues have been raised in association with the clinical introduction of AI applications, such as whether they will lead to greater surveillance of populations and how this should be controlled. Surveillance might compromise privacy but it could also be beneficial, enhancing the data with which the DL applications are trained, so perhaps this is an issue that will need to be contemplated in regulatory guidelines. Another issue that also needs to be addressed is the extent to which non-medical specialists such as computer scientists and IT specialists will gain power in clinical settings. Finally, the fragility associated with reliance on AI systems and the potential that monopolies will be established in specific areas of healthcare will also have to be considered .
In summary, it will be important to respect a series of criteria when designing and implementing AI-based clinical solutions to ensure that they are trustworthy .
We are clearly at an interesting moment in the history of medicine as we embrace the use of AI and big data as a further step in the era of medical digitalisation. Despite the many challenges that must be faced, this is clearly going to be a disruptive technology in many medical fields, affecting clinical decision making and the doctor–patient dynamic in what will almost certainly be a tremendously positive way. Different levels of automation can be achieved by introducing AI tools into clinical decision-making routines, selecting between fully automated procedures and aids to conventional protocols as specific situations demand. Some issues that must be addressed prior to the clinical implementation of AI tools have already been recognised in healthcare scenarios. For example, bias is an existing problem evident through inequalities in the care received by some populations. AI applications can be used to incorporate and examine large amounts of data, allowing inequalities to be identified and leveraging this technology to address these problems. Through training on different populations, it may be possible to identify specific features of these populations that have an influence on disease prevalence, and/or on its progression and prognosis. Indeed, the identification of population-specific features that are associated with disease will undoubtedly have an important impact on medical research. However, there are other challenges that are posed by these systems that have not been faced previously and that will have to be resolved prior to their widespread incorporation into clinical decision-making procedures . Automating procedures is commonly considered to be associated with greater efficiency, reduced costs and savings in time. The growing use of CE in digestive healthcare and the adaptation of these systems to an increasing number of circumstances generates a large amount of information, and each examination may require over an hour to analyse.
This not only requires the dedicated time of a trained clinician or specialist, but it may also increase the chance of errors due to tiredness or monotony (not least as lesions may only be present in a small number of the tens of thousands of images obtained ). DL tools have been developed based on CNNs to be used in conjunction with different CE techniques that aim to detect lesions or abnormalities in the intestinal mucosa . These algorithms are capable of reducing the time required to read these examinations to a matter of minutes (depending on the computational infrastructure available). Moreover, they have been shown to be capable of achieving accuracies and results not dissimilar to the current gold standard (expert clinician visual analysis), performances that will most likely improve with time and use. In addition, some of these tools will clearly be usable in real time, with the advantages that this will offer to clinicians and patients alike . As well as the savings in time and effort that can be achieved by implementing AI tools, these advances may to some extent also drive the democratization of medicine and help in the application of specialist tools in less well-developed areas. Consequently, the use of AI solutions might reduce the need for specialist training to be able to offer healthcare services in environments that may be more poorly equipped. This may represent an important complement to systems such as CE that involve the use of more portable apparatus capable of being used in areas with more limited access and where patients may not necessarily have access to major medical facilities. Indeed, it may even be possible to use CE in the patient’s home environment. It should also be noted that enhancing the capacity to review and evaluate large numbers of images in a significantly shorter period of time may also offer important benefits in the field of clinical research.
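The reading-time reduction described above rests on a simple triage pattern: score every frame and queue only the high-scoring frames for human review. In the sketch below, `lesion_score` is a hypothetical stand-in for a trained CNN's output probability, and the feature names are invented; real systems would run a neural network on the pixel data.

```python
# Sketch of the capsule-endoscopy triage pattern: a classifier scores each
# frame and only frames above a threshold are queued for clinician review,
# shrinking tens of thousands of frames to a short list.

def lesion_score(frame):
    # Hypothetical stand-in for a CNN's output probability.
    return frame["redness"] * 0.7 + frame["texture_irregularity"] * 0.3

def triage(frames, threshold=0.5):
    """Return (frames for clinician review, frames auto-cleared)."""
    review, cleared = [], []
    for frame in frames:
        (review if lesion_score(frame) >= threshold else cleared).append(frame)
    return review, cleared

frames = [
    {"id": 1, "redness": 0.9, "texture_irregularity": 0.8},
    {"id": 2, "redness": 0.1, "texture_irregularity": 0.2},
    {"id": 3, "redness": 0.6, "texture_irregularity": 0.4},
]
review, cleared = triage(frames)
print([f["id"] for f in review])  # frames flagged for the clinician
```

The threshold trades sensitivity against review burden: lowering it sends more frames to the clinician, raising it clears more frames automatically, which is exactly the machine-human division of labour discussed above.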
Drug discovery programmes and research into other clinical applications are notoriously slow and laborious. Thus, any tools that can help speed up the testing and screening capacities in research pipelines may have important consequences in the development of novel treatments. Moreover, when performing multicentre trials, the variation in the protocols implemented is often an additional and undesired variable. Hence, medical research and clinical trials in particular will benefit from the use of more standardized and less subjective tools. Accordingly, offering researchers the ability to access large amounts of data that have been collected in a uniform manner, even when obtained from different sites, and making it possible to perform medical examinations more swiftly, can only benefit clinical research studies and trials.
In terms of the introduction of AI applications into clinical pipelines, we consider the future to be one of great promise. While it is clear that it will not be seamless and it will require the coordinated effort of many stakeholders, the pot of gold that awaits at the end of the rainbow seems to be getting ever bigger. These applications raise important bioethical issues, not least those related to privacy, data protection, data bias, explainability and responsibility. Consequently, the design and implementation of these tools will need to respect specific criteria to ensure that they are trustworthy . Since these are tools that are breaking new ground, the solutions to these issues may also need to be defined ad hoc, adopting novel procedures. This is an issue that cannot be overlooked as it may be critical to ensure that the opportunities offered by this technology do not slip through our hands.
|
Clinical value analysis of integrated care model in reducing bleeding and complications during Da Vinci robot-assisted urology | e69e26c9-bd05-4d2a-a51f-989babb259dd | 11789911 | Robotic Surgical Procedures[mh] | Urology involves a variety of diseases, including tumors, stones, infections, and deformities, among others. Many urological diseases need to be treated by surgery, and the surgical methods are constantly evolving. Minimally invasive technologies such as laparoscopy and endoscopy are widely used and play an important role in reducing trauma and speeding up recovery. However, urological surgery may have a variety of complications, especially bleeding at the surgical site, which may lead to hematoma and, in severe cases, even require reoperation to stop the bleeding. Other complications, such as wound infection and urinary tract infection, can cause fever and pain. Traditional open surgery in urology has gradually been phased out because of drawbacks such as large wounds, poor patient tolerance, and slow recovery. Laparoscopic surgery, percutaneous nephroscopic surgery, and ureteroscopic surgery have become routine methods in urology, greatly reducing surgical trauma and postoperative recovery time. Laparoscopic surgery and the Da Vinci robotic surgical system are common alternatives. The Da Vinci robotic surgical system was introduced into China at the beginning of this century. It offers 3-dimensional vision, filtering of physiological tremor, and precise operation through mechanical instruments that substitute for the human hand, and it is now widely used in the field of surgery. In the field of urology, Da Vinci robot-assisted surgery is increasingly applied in radical resection of prostate cancer, radical resection of bladder cancer, and partial nephrectomy. The integrated nursing model is a new nursing service intervention method.
In the traditional nursing model, doctors often determine the treatment and nursing plan, and nurses follow the doctor’s advice. The integrated medical care model advocates placing the patient at the core, with the medical team jointly participating in tasks such as clinical diagnosis, disease treatment, and rehabilitation nursing, promoting the integration of medical care and ensuring the consistency of medical treatment and nursing operations, so as to maximize the quality of nursing services and help patients achieve better rehabilitation. In the integrated medical and nursing work mode, medical staff respect each other and cooperate as equals, making collaboration more proactive, healthy, and efficient. The medical team and nurses work together to provide nursing assistance to patients and actively help them improve their condition, thereby promoting their physical and mental recovery. The purpose of this paper is to explore whether implementing the integrated care model during Da Vinci robot-assisted urology surgery can reduce intraoperative bleeding and postoperative complications, reflecting the clinical value of the integrated care model.
2.1. Patient information

This study was approved by the Ethics Committee of Wuhan Hospital of Integrated Traditional Chinese and Western Medicine. By retrospective analysis, 93 patients who underwent Da Vinci robot-assisted urology surgery in our hospital from August 2022 to March 2024 were selected as the study subjects and divided into the intervention group and the control group according to whether the integrated medical care model was applied. There were 43 patients in the intervention group and 50 patients in the control group, who received routine care. There were no significant differences in age, sex, or course of disease between the 2 groups ( P > .05). Inclusion criteria were as follows: the patient underwent Da Vinci robot-assisted urology surgery; age > 18 years; the patient did not receive radiotherapy, chemotherapy, immunotherapy, or targeted therapy before surgery; the patient had no distant metastasis; and complete clinical data were available. Exclusion criteria were as follows: patients with other chronic diseases; patients with severe coagulation dysfunction; patients with other cancers; and patients with incomplete clinical data. The clinical trial protocol was approved by the ethics committee of the hospital, and all patients signed informed consent forms.

2.2. Methods

2.2.1. Nursing methods of the control group

In the control group, we used standard care, which included fasting and abstinence from drinking before surgery. After the operation, we paid close attention to changes in the patient’s condition and to the indwelling tubes, treating any abnormality immediately. We recorded the patient’s time to first anal flatus and guided the patient and family on postoperative eating, as well as on out-of-bed activities according to the patient’s actual situation.

2.2.2. Specific implementation of the integrated medical care mode

Set up an integrated medical care group: under the unified guidance of the leader, an integrated medical care group was set up, led by a team leader with a doctor in charge and 3 nurses, each with clear responsibilities and the ability to work together. The team jointly explored and continuously improved the treatment and care plans in order to deliver immediate and effective interventions to patients.

Preoperative nursing: before surgery, doctors and nurses made joint ward rounds, visited patients, and introduced in detail the advantages of Da Vinci robotic surgery and the problems that might arise during the operation. Patients were instructed to complete preoperative preparations such as fasting and abstinence from drinking.

Intraoperative coordination: during surgery, the medical team worked closely together, providing immediate feedback and resolving any problem as soon as it was encountered.

Postoperative care: patients were visited promptly, the medical team maintained constant communication, effective pain relief was given according to the doctor’s instructions, and drug side effects were closely monitored and addressed as soon as they arose.

Postoperative activities: patients were helped to turn over every 2 hours after surgery; patients and their families were guided to perform ankle pump exercises, turning in bed, and other activities; the goal and importance of early exercise were explained; and patients were encouraged to get out of bed as soon as possible and take simple indoor walks, strictly following the principle of gradual progression, with exercise intensity kept within what the patient could tolerate.

Postoperative guidance: key postoperative matters were explained to patients and their relatives, patients were guided on postoperative eating habits, and postoperative complications were prevented according to the patient’s breathing status and general condition.
2.3. Observation indicators

The surgical course, intraoperative blood loss, time to first postoperative flatus, and time to first out-of-bed activity were analyzed in the 2 groups. The following were also assessed: postoperative pain score (visual analog scale [VAS]), psychological status (self-rating anxiety scale [SAS] and self-rating depression scale [SDS]), quality of life score (36-Item Short Form Survey), postoperative catheter retention time, patient satisfaction, postoperative incision healing, incidence of postoperative urinary system infection, postoperative nutritional status (serum albumin level and body weight change), and postoperative complications (such as pressure sores, lung infections, and intestinal obstruction).

2.4. Statistical analysis

SPSS 20.0 software was used to analyze the collected data. Count data were expressed as percentages and compared with the χ² test. For measurement data, normality and homogeneity of variance were tested first. The Shapiro-Wilk test was used to assess normality: if P > .05, the data were considered normally distributed; if P < .05, they were not. The Levene test was used to assess homogeneity of variance: if P > .05, the variances were considered homogeneous; if P < .05, they were not. If the data were normally distributed with homogeneous variances, the t test was used and results were expressed as mean ± standard deviation (x̄ ± s). If the data did not follow a normal distribution or the variances were unequal, a nonparametric test such as the Wilcoxon rank-sum test was used. P < .05 was considered statistically significant.
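The test-selection procedure described above can be sketched with `scipy.stats` in place of SPSS: Shapiro-Wilk for normality, Levene for homogeneity of variance, then an independent-samples t test or a Wilcoxon rank-sum (Mann-Whitney U) test accordingly. The blood-loss figures below are invented for illustration; the original analysis used SPSS 20.0.

```python
# Sketch of the described test-selection procedure using scipy, alpha = 0.05.
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    equal_var = stats.levene(a, b).pvalue > alpha
    if normal and equal_var:
        # Both groups normal with homogeneous variances: t test.
        test, result = "t test", stats.ttest_ind(a, b)
    else:
        # Otherwise: nonparametric rank-sum (Mann-Whitney U) test.
        test, result = "rank-sum test", stats.mannwhitneyu(a, b)
    return test, result.pvalue

# Synthetic example: intraoperative blood loss (mL) in the two groups.
intervention = [95, 102, 88, 110, 99, 105, 93, 101]
control      = [130, 125, 140, 118, 135, 128, 122, 138]

test_used, p = compare_groups(intervention, control)
print(f"{test_used}: P = {p:.4f}")
```

The branching mirrors the text exactly: only when both the normality and homogeneity checks pass does the parametric t test apply; otherwise the rank-based alternative is used.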
This study was approved by the Ethics Committee of Wuhan Hospital of Integrated Traditional Chinese and Western Medicine, and all patients signed informed consent forms. In this retrospective analysis, 93 patients who underwent Da Vinci robot-assisted urological surgery in our hospital from August 2022 to March 2024 were selected as study subjects and divided into an intervention group and a control group according to whether the medical care integration mode was implemented. There were 43 patients in the intervention group and 50 patients in the control group, who received routine care. There were no significant differences in age, sex, or course of disease between the 2 groups ( P > .05). Inclusion criteria were as follows: underwent Da Vinci robot-assisted urological surgery; age > 18 years; no radiotherapy, chemotherapy, immunotherapy, or targeted therapy before surgery; no distant metastasis; and complete clinical data. Exclusion criteria were as follows: other chronic diseases; severe coagulation dysfunction; other cancers; and incomplete clinical data.
2.2.1. Nursing methods of control group Patients in the control group received standard perioperative care, including preoperative fasting and abstinence from drinking. After the operation, the nursing staff paid close attention to changes in the patient's condition, observed the indwelling tubes, and treated any abnormality immediately. The time of first anal exhaust was recorded, patients and their families were guided on postoperative diet, and out-of-bed activities were guided according to each patient's actual situation. 2.2.2. Specific implementation of the medical care integration mode Setting up a medical care integration group: under unified leadership, a medical care integration group was established, led by a team leader and composed of the doctor in charge and 3 nurses, each with clear responsibilities and the ability to work together. The medical team jointly explored and continuously improved treatment and care plans in order to deliver immediate and effective interventions to patients. Preoperative nursing: before surgery, doctors and nurses jointly made ward rounds, visited patients, and introduced in detail the advantages of Da Vinci robotic surgery and the problems that might arise during the operation. Patients were instructed to complete preoperative preparation such as fasting and abstinence from drinking. Intraoperative coordination: during surgery, the medical team worked closely together to carry out the operation, providing immediate feedback and resolving any problems as soon as they were encountered.
Postoperative care: patients were visited promptly, and the medical team maintained immediate communication; effective analgesia was given according to the doctor's instructions, drug side effects were closely monitored, and any problem was resolved immediately. Postoperative activities: patients were helped to turn over every 2 hours after surgery; patients and their families were guided to perform ankle pump exercises, turning over in bed, and other activities; the goal and importance of early exercise were explained, and patients were encouraged to get out of bed as soon as possible and take simple indoor walks, strictly following the principle of gradual progression and keeping exercise intensity within tolerance. Postoperative guidance: key postoperative matters were explained to patients and their relatives, postoperative eating habits were guided, and complications were prevented according to the patient's breathing status and general condition.
2.3. Observation indicators The surgical course, intraoperative blood loss, time of first postoperative exhaust, and time of getting out of bed after the operation were analyzed in the 2 groups. In addition, the postoperative pain score (visual analog scale [VAS]), psychological status (self-rating anxiety scale [SAS] and self-rating depression scale [SDS]), quality of life score (36-Item Short Form Survey), postoperative catheter retention time, patient satisfaction, postoperative incision healing, incidence of postoperative urinary system infection, postoperative nutritional status (serum albumin level and body weight change), and postoperative complications (such as pressure sores, lung infections, and intestinal obstruction) were recorded and compared.
2.4. Statistical analysis SPSS 20.0 software was used to analyze the collected data. Count data were expressed as percentages and compared with the χ² test. For measurement data, tests of normality and homogeneity of variance were performed first: the Shapiro-Wilk test was used for normality ( P > .05 indicating a normal distribution) and the Levene test for homogeneity of variance ( P > .05 indicating homogeneous variances). If the data were normally distributed with homogeneous variances, the t test was used, and results were expressed as mean ± standard deviation ( x̄ ± s ). If the data did not follow a normal distribution or had unequal variances, a nonparametric test (the Wilcoxon rank-sum test) was used. P < .05 was taken as the threshold for statistical significance.
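The decision procedure described above, normality and variance-homogeneity checks selecting between a t test and the Wilcoxon rank-sum test, can be sketched as follows. This is a minimal stdlib-only illustration, not the SPSS implementation: the `is_normal` and `equal_var` flags stand in for the P > .05 outcomes of the Shapiro-Wilk and Levene tests, and the rank-sum branch uses the large-sample normal approximation without tie correction.

```python
import math

def pooled_t(x, y):
    """Two-sample pooled-variance t statistic (normal data, equal variances)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)   # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)  # pooled variance
    return (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))

def rank_sum_z(x, y):
    """Wilcoxon rank-sum statistic for group x as a z score
    (normal approximation; assumes no tied values for simplicity)."""
    nx, ny = len(x), len(y)
    ranked = sorted(x + y)
    w = sum(ranked.index(v) + 1 for v in x)          # rank sum of group x
    mean_w = nx * (nx + ny + 1) / 2
    var_w = nx * ny * (nx + ny + 1) / 12
    return (w - mean_w) / math.sqrt(var_w)

def compare_groups(x, y, is_normal, equal_var):
    """Apply the selection rule from the Methods: t test when both
    assumptions hold, Wilcoxon rank-sum test otherwise."""
    if is_normal and equal_var:
        return "t", pooled_t(x, y)
    return "rank-sum", rank_sum_z(x, y)

# Hypothetical per-patient values, purely for illustration
kind, stat = compare_groups([1.2, 1.5, 1.8], [2.4, 2.9, 3.1],
                            is_normal=True, equal_var=True)
```

In practice the two flags would come from running the assumption tests on the raw data; the sketch only makes the branching logic of the Methods explicit.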
3.1. Comparison of intraoperative conditions between the 2 groups The mean operation time was 233.54 ± 35.96 minutes in the intervention group and 236.42 ± 37.65 minutes in the control group; the difference between the 2 groups was not statistically significant ( P > .05). Intraoperative blood loss in the intervention group was significantly less than that in the control group ( P < .05) (Table ). 3.2. Comparison of postoperative conditions between the 2 groups 3.2.1. Comparison of first postoperative exhaust time and postoperative activity time between the 2 groups The mean time of first postoperative exhaust was 1.52 ± 0.51 days in the intervention group and 2.51 ± 0.84 days in the control group, and the mean time to getting out of bed was 1.03 ± 0.13 days and 1.25 ± 0.14 days, respectively. Both were significantly shorter in the intervention group than in the control group ( P < .05) (Table ). 3.2.2. Comparison of postoperative complications between the 2 groups The incidence of postoperative complications such as pressure ulcers, pulmonary infection, and intestinal obstruction was 4.65% in the intervention group and 16.00% in the control group; the incidence in the intervention group was significantly lower ( P < .05) (Table ). 3.2.3.
Postoperative pain VAS score In this study, the postoperative pain VAS score was 3.21 ± 1.23 points in the intervention group and 4.56 ± 1.54 points in the control group, indicating that postoperative pain in the intervention group was significantly milder and that the medical care integration mode was effective in relieving postoperative pain (Table ). 3.2.4. SAS and SDS scores of psychological state The SAS score was 40.23 ± 5.67 in the intervention group and 48.56 ± 6.34 in the control group, indicating significantly less anxiety in the intervention group. The SDS score was 42.34 ± 5.89 in the intervention group and 50.23 ± 6.56 in the control group, indicating significantly less depression in the intervention group. These results suggest that the medical care integration mode can effectively reduce postoperative anxiety and depression and better safeguard patients' mental health (Table ). 3.2.5. Quality of life score The 36-Item Short Form Survey quality of life scores were higher in the intervention group than in the control group. Across the 8 dimensions (physical functioning, role-physical, bodily pain, general health, vitality, social functioning, role-emotional, and mental health), the intervention group scored 88.5 ± 11.2, 80.2 ± 10.5, 82.3 ± 12.1, 86.7 ± 10.3, 85.5 ± 12.5, 87.8 ± 11.8, 83.2 ± 10.6, and 84.6 ± 11.3, respectively, versus 75.6 ± 10.8, 68.5 ± 9.8, 70.1 ± 11.5, 72.3 ± 9.5, 70.2 ± 10.8, 72.5 ± 10.2, 69.8 ± 9.5, and 70.5 ± 10.1 in the control group.
The results showed a significant difference between the 2 groups ( P < .001), indicating that the intervention group using the integrated care model performed better on all dimensions of quality of life (Table ). 3.2.6. Postoperative catheter indwelling time, patient satisfaction, grade A healing rate of postoperative incision, and incidence of postoperative urinary system infection compared between the 2 groups The catheter indwelling time was 3.56 ± 1.23 days in the intervention group and 4.87 ± 1.56 days in the control group, significantly shorter in the intervention group, suggesting that the medical care integration mode helps shorten catheter indwelling time and reduce the risk of related complications, and is worthy of wide clinical application. Patient satisfaction was 93.02% in the intervention group and 78.00% in the control group ( P < .001), indicating significantly higher satisfaction in the intervention group and suggesting that the medical care integration mode can effectively improve patient satisfaction. The grade A incision healing rate was 90.70% in the intervention group and 76.00% in the control group ( P < .001), indicating better incision healing in the intervention group; the integrated nursing mode thus plays a positive role in promoting postoperative incision healing, and attention to the details of perioperative nursing can further improve healing quality.
The incidence of postoperative urinary system infection was 4.65% in the intervention group and 16.00% in the control group ( P < .001), indicating a significantly lower infection rate in the intervention group and suggesting that the medical care integration mode, with strict infection prevention and control measures during nursing, can effectively reduce the risk of postoperative urinary system infection (Table ). 3.2.7. Decrease of serum albumin level and body weight after operation Serum albumin in the intervention group was 35.67 ± 5.23 g/L before surgery and 34.44 ± 4.56 g/L after surgery (a decrease of 1.23 g/L), versus 35.89 ± 5.56 g/L before and 33.33 ± 4.21 g/L after surgery in the control group (a decrease of 2.56 g/L); the postoperative decrease was significantly smaller in the intervention group ( P < .001). Mean body weight in the intervention group was 65.23 ± 8.56 kg before surgery and 64.36 ± 8.12 kg after surgery, versus 65.56 ± 8.87 kg and 64.00 ± 7.56 kg in the control group ( P < .001), indicating a smaller postoperative weight change in the intervention group. These results suggest that the integrated care model has a clear advantage in maintaining postoperative nutritional status: the smaller reduction in serum albumin means that patients' protein nutritional status was better preserved, supporting recovery and tissue repair, and the smaller weight fluctuation reflects relatively stable nutritional intake and metabolic balance after surgery (Table ).
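As a plausibility check, the between-group comparisons reported above can be reproduced from the published mean ± SD values and the group sizes (43 and 50). The stdlib Python sketch below computes the pooled two-sample t statistic from summary statistics; for the VAS scores it gives |t| ≈ 4.6 on 91 degrees of freedom, far beyond the two-sided critical value of about 1.99, which is consistent with the reported significance.

```python
import math

def t_from_summary(m1, s1, n1, m2, s2, n2):
    """Pooled two-sample t statistic computed from summary statistics
    (mean, SD, n per group), assuming equal variances."""
    sp2 = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (m1 - m2) / se

# VAS pain: intervention 3.21 +/- 1.23 (n=43) vs control 4.56 +/- 1.54 (n=50)
t_vas = t_from_summary(3.21, 1.23, 43, 4.56, 1.54, 50)   # about -4.62

# SAS anxiety: intervention 40.23 +/- 5.67 vs control 48.56 +/- 6.34
t_sas = t_from_summary(40.23, 5.67, 43, 48.56, 6.34, 50)  # about -6.63
```

The same function can be applied to the other mean ± SD pairs reported above; a negative value simply indicates that the intervention group mean is lower.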
In recent years, with the continuous progress of medical technology and the updating of medical equipment worldwide, traditional diagnosis and treatment schemes have gradually become unable to meet clinical needs. Among urological surgical treatments, minimally invasive surgery has become the preferred mode, and the surgical approach is no longer limited to traditional open or laparoscopic procedures. In 2002, Hashizume et al successfully completed urological surgery using the Da Vinci robotic surgical system for the first time. In 2010, another team took the lead in introducing the Da Vinci robotic surgery system into China, and to date more than 100 hospitals in China can perform Da Vinci robotic surgery. The operative time of Da Vinci robotic surgery is longer than that of traditional open and laparoscopic surgery, but it has an obvious advantage in intraoperative blood loss. The Da Vinci system gains flexibility from its internal articulated wrist, reproducing the surgeon's hand movements: it translates those movements into precise control of the instruments in the surgical field, so that the blood vessels and tissues around the operative area can be fully displayed in the limited space, minimizing the possibility of operational error. In addition, it automatically filters the natural tremor of the operator's hand, making the operation more accurate. The endoscope system provides high-definition 3D images, a realistic surgical field of view, accurate anatomical planes, and clear spatial positioning. Operating from a remote console reduces fatigue and discomfort during surgery, which is beneficial for difficult and time-consuming resections of urological malignant tumors.
During urological surgery, patients' fear of the disease and lack of understanding of treatment and care plans often hinder treatment and postoperative recovery, making timely intervention necessary to improve outcomes. In recent years, the integrated nursing model guided by the concept of enhanced recovery after surgery has been used in perioperative nursing for various operations. Through the medical care integration mode, multidisciplinary team members such as doctors and nurses can give full play to their collaborative effect. [ – ] By implementing a variety of optimized surgical nursing programs, the side effects caused by surgery and the incidence of complications can be reduced. At the same time, this model closely connects attending doctors and nursing staff to provide rehabilitation care for patients as a team. [ – ] By contrast, nurses working under the conventional model often rely on past nursing experience to provide care according to the doctor's instructions, which carries a degree of randomness and blindness. [ – ] Compared with the conventional nursing mode, the integrated nursing mode is therefore more conducive to patient rehabilitation. In this retrospective study, we found that under the same surgical mode, compared with the traditional nursing mode, intraoperative blood loss in urological patients was significantly reduced, the times to first postoperative exhaust and to getting out of bed were significantly shortened, and the incidence of postoperative complications was lower. These findings support the better effect of the medical care integration model on perioperative treatment and rehabilitation.
[ – ] In terms of pain management, the postoperative pain VAS score of the intervention group was significantly lower than that of the control group, which may be because the close cooperation between medical and nursing staff allowed more timely and effective pain-relief measures. In terms of psychological state, the SAS and SDS scores of the intervention group were better than those of the control group, indicating that the medical care integration model provided more comprehensive psychological support, alleviated the psychological pressure of surgery, and contributed to recovery. The quality of life scores, catheter indwelling time, patient satisfaction, grade A healing rate of the postoperative incision, and incidence of postoperative urinary system infection further confirmed the superiority of this model: improved quality of life reflects the good effect of holistic nursing; shorter catheter indwelling time reduces patient discomfort and the risk of infection; high satisfaction reflects patients' recognition of integrated medical care services; and better wound healing with a lower infection rate shows the model's positive effect in promoting postoperative recovery and preventing complications. In terms of postoperative nutritional status, the changes in serum albumin level and body weight were smaller in the intervention group than in the control group, indicating that the medical care integration mode can better guarantee nutritional intake and metabolic balance and provide a good basis for physical recovery. Therefore, in Da Vinci robot-assisted urological surgery, this approach has shown excellent clinical effects and is worthy of promotion and application. This study has certain limitations that need to be addressed. First, the relatively small sample size (93 patients) may restrict the generalizability of the findings.
Larger sample sizes could provide more robust statistical power and confirm the applicability of the results to broader populations. Additionally, the study was conducted in a single center, which could introduce geographical and institutional biases. For example, variations in hospital resources, caregiver training programs, and patient management protocols might have influenced the outcomes. Furthermore, differences in caregiver experience, particularly among the medical and nursing staff involved in the integrated care model, might have impacted the consistency of intervention delivery. Highly experienced caregivers may have contributed to better patient outcomes, whereas less experienced staff may require additional training to achieve similar results. The study did not stratify or control for caregiver experience, which could be an important confounding factor. Another consideration is the complexity of the surgeries included in the analysis. While Da Vinci robot-assisted procedures generally follow standardized protocols, variations in the complexity of cases, such as tumor size, anatomical challenges, or the presence of comorbidities, may have influenced surgical outcomes, including blood loss and complication rates. These factors were not explicitly analyzed or controlled for in this study, which may limit the interpretation of the results. Finally, the retrospective nature of the study inherently carries risks of bias, including potential inaccuracies in data recording and patient recall. Prospective randomized controlled trials would provide stronger evidence and eliminate many of the biases associated with retrospective designs. Moreover, the lack of long-term follow-up data means that the impact of the integrated care model on quality of life, recurrence rates, and other long-term outcomes remains unclear. 
Future research should aim to address these gaps by including multicenter, large-sample studies with long-term follow-up and stratification based on caregiver experience and case complexity. Compared with other similar studies, this study also confirmed the advantages of the integrated care model in improving the surgical indicators and rehabilitation outcomes of patients. However, other studies may pay more attention to the rehabilitation guidance of patients after discharge and the connection of community care, while this study mainly focuses on the perioperative effect of the Da Vinci robot, especially the amount of intraoperative blood loss and complications. This study included more diverse types of surgery and patient groups, thus providing broader evidence for the application of the integrated care model. By comparing this study with other studies, we can further clarify the characteristics and contributions of this study and also provide a reference for the future research direction of Da Vinci robot nursing, so as to continuously improve and optimize the application of the integrated medical care model in urology surgery.
To sum up, the integrated medical–nursing care model significantly reduced bleeding and complications during Da Vinci robot-assisted urological surgery, confirming its feasibility, safety, and effectiveness. Optimized nursing plans can be formulated according to the circumstances and technical level of individual hospitals and patients, and the model can be widely adopted in clinical nursing practice, where it has substantial technical and clinical value.
Conceptualization: Ting Wang, Youqun Zhang. Data curation: Ting Wang, Yu Hu, Youqun Zhang. Formal analysis: Ting Wang, Yu Hu, Youqun Zhang. Investigation: Ting Wang, Yu Hu. Methodology: Ting Wang. Writing—original draft: Ting Wang, Yu Hu, Youqun Zhang. Writing—review & editing: Ting Wang. Validation: Youqun Zhang. Visualization: Youqun Zhang.
Three-dimensional quantitative temporomandibular joint changes in skeletal class I malocclusion treated with extraction and non-extraction protocols: a comparative study of fixed orthodontic appliances and clear aligners | a0a4da1c-d7f7-402e-aca4-e3442858d41b | 11743406 | Dentistry[mh] | The temporomandibular joint (TMJ) is a complex joint facilitating mandibular movement and adapting structurally in response to external factors such as age, muscle activity, and occlusal forces . Malocclusions, such as crossbites, crowding, and missing teeth, are known to influence TMJ morphology and function. However, the relationship between malocclusion and TMJ remains unclear, with some studies showing associations , while others do not . Most clinical evidence indicates that no direct causal relationship between orthodontic treatment and temporomandibular joint disorders (TMDs) . The relationship between dental occlusion and TMD is debated, with systematic reviews finding no definitive evidence linking occlusal features to TMDs . Orthodontic treatment aims to restore balanced occlusion, which may impact TMJ position and stability . However, evidence regarding their role in TMD development remains inconclusive. Some studies suggest orthodontic interventions may promote TMJ remodeling and improve condyle-glenoid fossa relationship, enhancing joint function . Other researches suggested that orthodontic appliances may disrupt occlusal stability, potentially triggering or worsening TMDs . These findings underscore the need for rigorous research to clarify the complex link between orthodontic treatments, occlusal dynamics, and TMDs. The orthodontic community is similarly divided on the impact of extractions, with proponents citing benefits for crowding and vertical dimension control , while opponents argue that extractions may contribute to TMJ dysfunction . 
Traditional fixed orthodontic appliances (FAs) have long been the standard in orthodontic treatment, providing robust control over tooth movement, whereas clear aligners (CAs) have emerged as a popular alternative due to their aesthetic advantages and less intrusive impact on TMJ dynamics . Unlike FAs, CAs offer complete coverage of the tooth crowns and precise force application through digitally designed attachments, potentially enhancing control of three-dimensional tooth movement. The aligner thickness at the occlusal surface can act as a 'bite-block', aiding vertical dimension control . Despite these benefits, controlling vertical tooth movement remains challenging in orthodontics. Precise movement control is crucial to avoid complications such as mandibular rotation . While CAs offer better oral hygiene and a reduced risk of enamel demineralization, their effectiveness in controlling specific tooth movements is still debated . Although cone beam computed tomography (CBCT) has enhanced our ability to assess TMJ structures in three dimensions (3D), surpassing conventional methods , limited research has focused on evaluating TMJ positional and morphological changes in adults before and after treatment with FAs and CAs, particularly in extraction and non-extraction cases . To the best of the authors’ knowledge, this study is the first to use 3D CBCT to comprehensively compare TMJ structural changes between these approaches. It aims to clarify the impact of FAs and CAs on TMJ stability and remodeling, building on prior research that has highlighted both the potential benefits and risks of orthodontic treatments for TMJ health.
This study aimed to address gaps in the literature by providing new insights into the effects of FAs and CAs on TMJ adaptations. Through three-dimensional evaluation of TMJ structural changes following treatment with FAs and CAs in both extraction and non-extraction cases, it contributes to refining orthodontic treatment planning and appliance selection for optimal TMJ outcomes.
Sample selection
The Ethics Committee of the China Medical University School of Stomatology approved this study (Ethics Approval No. CMUKQ-2024-019), and informed consent was obtained from all participants. All methods were carried out in accordance with the principles of the Declaration of Helsinki. This research involved a retrospective review of the records of adult patients treated with FAs or CAs who underwent non-extraction or extraction treatment between 2017 and 2024. The patient selection process adhered to strict inclusion and exclusion criteria to ensure uniformity across groups. From an initial pool of 500 patients, 120 adult cases were selected after applying the criteria. Participants were categorized into four equal groups (n = 30 per group) based on treatment modality (FAs or CAs) and treatment protocol (extraction or non-extraction) to allow direct comparisons of TMJ adaptations under these clinical scenarios in adults (non-growing patients). All patients meeting the following selection criteria were included: (1) age over 18 years; (2) skeletal Class I malocclusion with moderate crowding in both arches; (3) no missing teeth (second molar to second molar) for the non-extraction group, and four first premolars extracted for the extraction group; (4) no reported history of TMD symptoms, with TMD screening based on the DC/TMD criteria applied to dental records (TMJ pain, sounds, and mandibular range of motion) ; and (5) complete treatment records with high-quality pre- and post-treatment CBCT images.
Exclusion criteria were: (1) previous orthodontic, prosthodontic, or orthognathic treatment; (2) presence of impacted, supernumerary, or missing teeth; (3) facial asymmetry or functional mandibular deviations; (4) craniofacial syndromes; and (5) periodontal disease. Figure provides a flowchart of the patient selection and allocation process, detailing screening, exclusions, and the final group distribution, ensuring transparency and reproducibility. Sample size was estimated using G*Power 3.1 at a 5% significance level and 95% power, based on the SN-MP angle differences reported by Wang et al. ; 27 patients per group were needed, which was increased to 30 for robustness, examining 240 patient data sets in total. Each appliance group comprised 60 patients, divided equally between premolar-extraction and non-extraction treatment. In this study, non-extraction patients in the FAs group were treated with the Damon Q self-ligating bracket system (Ormco, USA). Treatment started with 0.014-inch or 0.016-inch NiTi wires, progressing to 0.016 × 0.022-inch NiTi wires and then to stainless steel wires of the same dimension. After alignment and leveling, molar distalization was achieved using Class I elastics (1/4-in, 4.5 oz, Ormco Corp.) attached from mini-screws to hooks between the lateral incisors and canines . Mini-screws (2 mm diameter, 10 or 12 mm length; Bioray, Taiwan) were placed in the infrazygomatic crest area between the maxillary first and second molars, and in the buccal shelf area between the mandibular first and second molars . Class II elastics were also used to exert a distalization force of about 300 g . In the CAs group, comprehensive orthodontic treatment with clear aligners (Align Technology) was applied to all teeth, including the second molars.
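As a rough cross-check of the G*Power estimate above, the per-group n for a two-sided independent-samples t-test can be approximated in plain Python. The standardized effect size d = 1.0 used here is an illustrative assumption (the value derived from the Wang et al. SN-MP data is not stated in the text); it happens to reproduce the reported 27 patients per group.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.95):
    """Approximate per-group sample size for a two-sided independent-samples
    t-test, using the normal approximation plus Guenther's small-sample
    correction term (z_alpha^2 / 4)."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # ~1.960 for alpha = 0.05 (two-sided)
    z_b = z.inv_cdf(power)           # ~1.645 for power = 0.95
    n = 2 * ((z_a + z_b) / d) ** 2 + z_a ** 2 / 4
    return ceil(n)

print(n_per_group(1.0))  # → 27
```

Without the correction term the normal approximation gives 26; the correction brings it in line with the noncentral-t calculation that G*Power performs.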
Treatment involved custom attachments per the manufacturer's and the orthodontist's specifications. Align Technology's protocol followed a staged approach for sequential molar distalization, with each aligner moving the teeth by 0.25 mm. Molar distalization was enhanced by elastics (1/4-inch, 4.5 oz, Ormco Corp.) attached from the canines to mini-screws. After distalization, a refinement period was necessary. Patients wore each aligner for 22 h daily for 7–10 days, progressing to the next aligner after a six-week evaluation. In the extraction groups, the extraction space was used to alleviate crowding and retract the incisors in both arches. Both FAs and CAs treatments involved three phases: alignment and leveling, space closure, and fine-tuning (finishing). In the FAs group, mini-screws retracted the incisors, with archwire adjustments and intermaxillary elastics used for space closure. Treatment concluded once the teeth were properly aligned and all spaces were closed. For CAs treatment, a standardized protocol was used with attachments on the canines, second premolars, and molars for alignment. The process began with retraction of the canines, followed by the incisors, repeating as needed to close the extraction spaces. Class I, II, and vertical elastics supported space closure and bite adjustments. Patients wore the aligners for at least 22 h daily, switching them bi-weekly. Typically, two refinement phases corrected bite openings or completed space closure. Class II elastics ensured dental anchorage, prevented anterior tooth flaring, and enhanced intercuspation . All treatments were conducted by a highly experienced orthodontist, ensuring reliable execution. Factors such as malocclusion severity, biomechanics, extent of tooth movement (extraction and non-extraction), and treatment outcome quality were considered to assess and compare difficulty levels between patient groups.
The American Board of Orthodontics (ABO) discrepancy index was used to evaluate case difficulty , providing a standardized measure of treatment challenges across all studied groups.
CBCT analysis
CBCT analysis was conducted using the iCAT CBCT system (KaVo 3D eXam, KaVo Dental, Germany) with a 23 × 17 cm field of view (FOV), 37.1 mAs exposure, 17.8 s scan duration, and 120 kV settings. Images had a 0.3 mm slice thickness and voxel size, with a resolution of 768 × 768 pixels. To minimize motion artifacts, patients kept the Frankfort horizontal plane (FHP) parallel to the floor and were instructed to remain still during scanning. The imaging settings were optimized to balance resolution and radiation exposure. The FOV ensured comprehensive bilateral TMJ coverage, while the 0.3 mm voxel size provided adequate spatial resolution for precise identification of TMJ landmarks, as recommended in standard protocols . Pre- and post-treatment CBCT scans were converted to DICOM format and analyzed using Invivo 6.0 software (Anatomage, San Jose, CA, USA). The three-dimensional TMJ analysis methodology adopted in our study was based on the protocol established by Alhammadi et al. . This included standardized identification of skeletal and TMJ landmarks, outlined in Supplementary material 1 (Fig. ), with the corresponding reference planes and lines presented in Supplementary material 2 and the measurements detailed in Supplementary material 3 (Fig. ). The accuracy and precision of the condylar position were assessed using two methods. First, the condylar position relative to basal craniofacial structures was evaluated using reference planes (MSP, HP, and VP). Second, the formula by Pullinger et al., (P − A)/(P + A) × 100%, determined the centrality of the condyle on the sagittal slice.
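As a minimal sketch, the Pullinger centrality formula can be expressed as a small helper; the function names are illustrative, and the ±12% cut-offs are those used for categorization in this study.

```python
def pullinger_ratio(posterior_space_mm, anterior_space_mm):
    """Condylar centrality ratio: (P - A) / (P + A) x 100%."""
    p, a = posterior_space_mm, anterior_space_mm
    return (p - a) / (p + a) * 100.0

def classify_condyle(ratio_percent):
    """Categorize the condyle position using the +/-12% cut-offs."""
    if ratio_percent < -12.0:
        return "posterior"
    if ratio_percent > 12.0:
        return "anterior"
    return "concentric"

# Example with illustrative posterior/anterior joint-space values (mm)
r = pullinger_ratio(2.68, 2.57)
print(round(r, 1), classify_condyle(r))  # → 2.1 concentric
```

A ratio near zero indicates a centered condyle; large negative or positive values flag posterior or anterior displacement, respectively.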
Condyle positions were categorized using the Pullinger formula as posterior (< −12%), anterior (> +12%), or concentric (±12%), which helps assess joint stability and remodeling during treatment. Subsequent CBCT images were taken with a 20 × 25 cm field of view, at 110 kV and 8.8 mAs, with an 18-s exposure time. The voxel dimension was 0.3 mm and the slice thickness 2 mm, with FHP alignment ensured by a laser guide. The TMJ space was segmented into six 1.5 mm sections per side, and volumes were calculated using the sigma volume equation V ≈ Σ_{i=1}^{n} A(x_i)Δx, where A(x_i) is the cross-sectional area of the i-th slice and Δx is the slice thickness (Fig. ) . Intra- and inter-observer reliability were evaluated by randomly selecting 24 CBCT images, which were measured independently one month later to confirm consistency. Measurements were conducted twice within a two-week interval by two observers to ensure reliability.
Statistical analysis
Data analysis was conducted using SPSS software, version 26 (IBM Corp., Armonk, NY, USA). To ensure the reliability and accuracy of measurements, intra-class correlation coefficients (ICCs) were calculated, and the technical error of measurement (TEM) along with the relative technical error of measurement (rTEM) were assessed. Descriptive statistics were used to summarize the data, with continuous variables reported as means and standard deviations (SD) and categorical variables presented as frequencies and percentages. The Shapiro–Wilk test was employed to assess the normality of the data. For statistical comparisons, paired t-tests were used for intra-group comparisons, while independent t-tests were applied for inter-group comparisons. Baseline differences between groups were assessed using independent t-tests for continuous variables and chi-squared tests for categorical variables. Statistical significance was set at p ≤ 0.05, and effect size was measured using Cohen's d when significant results were observed.
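Two computations from this section can be sketched numerically: the slice-wise (Riemann-sum) joint-space volume, V ≈ Σ A(x_i)Δx, and the technical error of measurement used for reliability, TEM = √(Σ d_i² / 2n) with rTEM = TEM / mean × 100%. The area values below are made-up illustrations, not study data.

```python
from math import sqrt

def tmj_volume(slice_areas_mm2, dx_mm=1.5):
    """Riemann-sum volume: sum of cross-sectional areas times slice thickness."""
    return sum(slice_areas_mm2) * dx_mm

def tem(first_pass, second_pass):
    """Technical error of measurement between two repeated measurement passes:
    sqrt(sum of squared differences / 2n)."""
    diffs = [a - b for a, b in zip(first_pass, second_pass)]
    return sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))

def rtem(first_pass, second_pass):
    """Relative TEM, expressed as a percentage of the grand mean."""
    grand_mean = (sum(first_pass) + sum(second_pass)) / (2 * len(first_pass))
    return tem(first_pass, second_pass) / grand_mean * 100.0

# Six 1.5 mm sections per side, as in the protocol; areas (mm^2) are illustrative.
areas = [10.0, 12.5, 14.0, 13.5, 11.0, 9.0]
print(tmj_volume(areas))  # → 105.0 (mm^3)
```

Identical repeated measurements yield TEM = 0; a small rTEM (conventionally a few percent) indicates acceptable measurement precision.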
Sample and descriptive data
In the non-extraction group, 30 patients received FAs (mean age: 22.21 ± 5.20 years) and 30 received CAs (mean age: 24.27 ± 4.27 years). In the extraction group, 30 patients were treated with FAs (mean age: 23.29 ± 4.21 years) and 30 with CAs (mean age: 24.35 ± 4.68 years). No significant differences were found in treatment duration between groups; non-extraction treatments averaged 2.47 ± 0.73 years for FAs and 2.21 ± 0.74 years for CAs ( p = 0.168), while extraction treatments lasted 3.27 ± 0.85 years with FAs and 2.95 ± 0.94 years with CAs ( p = 0.173).
No significant differences were found in baseline characteristics, including age, gender, treatment duration, skeletal and dental characteristics, and ABO discrepancy index scores ( p ≥ 0.05) (Table ). All patients presented with skeletal Class I malocclusion, moderate crowding, and asymptomatic TMJs at baseline, as determined by clinical and radiological evaluation. The intra- and inter-observer reliability analyses of all measurements showed high reliability using 20% of the total sample (Supplementary material 4).
Intra-group comparisons
Tables and compare TMJ measurements at two time points (T0 and T1) within the non-extraction and extraction groups (FAs and CAs), respectively. In the non-extraction FAs group (Table ), the anteroposterior condylar position increased significantly (from 6.11 to 6.49 mm, p = 0.000, effect size = 0.210). The vertical condylar position decreased significantly (from 2.32 to 2.13 mm, p = 0.025, effect size = 0.150). The vertical condylar inclination decreased (from 58.15 to 56.16°, p = 0.022, effect size = 0.312). The medial condylar inclination increased significantly (from 8.90 to 10.07°, p = 0.034, effect size = 0.251). In the extraction FAs group (Table ), significant changes were noted: the anteroposterior condylar position increased significantly (from 6.58 to 7.08 mm, p = 0.000, effect size = 0.261), the anteroposterior condylar joint position decreased significantly (from 2.39 to − 11.78 mm, p = 0.000, effect size = 0.861), and the anterior joint space increased significantly (from 2.57 to 2.85 mm, p = 0.027, effect size = 0.454). In contrast, the posterior joint space decreased significantly (from 2.68 to 2.36 mm, p = 0.021, effect size = 0.619).
Inter-group comparisons
Table presents inter-group comparisons of TMJ measurements (T0-T1) for both the non-extraction and extraction groups, highlighting significant differences between the FAs and CAs groups.
Negative mean difference values (T0 minus T1) indicate an increase in the respective measurements post-treatment. In the extraction group, FAs demonstrated significant increases in anteroposterior condylar position ( p = 0.014, effect size = 0.657) and anteroposterior condylar joint position ( p = 0.046, effect size = 0. 0.525), indicating greater condylar remodeling compared to CAs. Figure illustrates changes in the anteroposterior ratio of condyle positioning across treatment groups, as calculated using the Pullinger formula. In the non-extraction group, FAs experienced a slight increase in posterior position (PP), a rise in centric position (CP), and a decrease in anterior position (AP). On the other hand, CAs saw stable PP, increased CP, and reduced AP. The extraction group revealed more distinct changes: FAs showed a significant increase in PP, a decrease in CP, and a reduction in AP; CAs had stable PP, increased CP, and decreased AP. Overall, the data suggest a general increase in CP and a decrease in AP across both groups, with a notable increase in PP, particularly in FAs of the extraction group. In the non-extraction group, 30 patients received FAs (mean age: 22.21 ± 5.20 years) and 30 received CAs (mean age: 24.27 ± 4.27 years). In the extraction group, 30 patients were treated with FAs (mean age: 23.29 ± 4.21 years) and 30 with CAs (mean age: 24.35 ± 4.68 years). No significant differences were found in treatment duration between groups; non-extraction treatments averaged 2.47 ± 0.73 years for FAs and 2.21 ± 0.74 years for CAs ( p = 0.168), while extraction treatments lasted 3.27 ± 0.85 years with FAs and 2.95 ± 0.94 years with CAs ( p = 0.173). No significant differences were found in baseline characteristics, including age, gender, treatment duration, skeletal and dental characteristics, and ABO discrepancy index scores ( p ≥ 0.05) (Table ). 
All patients presented with skeletal Class I malocclusion, moderate crowding, and asymptomatic TMJs at baseline, as determined by clinical and radiological evaluation. The intra- and inter-observer reliability analyses of all measurements showed high reliability using 20% of the total sample (Supplementary material 4). Tables and compare TMJ measurements at two time points (T0 and T1) within the non-extraction and extraction groups (FAs and CAs), respectively. In the non-extraction FAs group, Table , the anteroposterior condylar position increased significantly (from 6.11 to 6.49 mm, p = 0.000, effect size = 0.210). The vertical condylar position decreased significantly (from 2.32 to 2.13 mm, p = 0.025, effect size = 0.150). The vertical condylar inclination decreased (from 58.15 to 56.16°, p = 0.022, effect size = 0.312). The medial condylar inclination increased significantly (from 8.90 to 10.07°, p = 0.034, effect size = 0.251). In the extraction FAs group, Table , significant changes were noted, where the anteroposterior condylar position increased significantly (from 6.58 to 7.08 mm, p = 0.000, effect size = 0.261). The anteroposterior condylar joint position demonstrated a significant decrease (from 2.39 to − 11.78 mm, p = 0.000, effect size = 0.861), and the anterior joint space increased significantly (from 2.57 to 2.85 mm, p = 0.027, effect size = 0.454). In contrast, the posterior joint space decreased significantly (from 2.68 to 2.36 mm, p = 0.021, effect size = 0.619). Table presents an inter-group comparisons of TMJ measurements (T0-T1) for both non-extraction and extraction groups, highlighting significant differences between the FAs and CAs groups. Negative mean difference values (T0 minus T1) indicate an increase in the respective measurements post-treatment. 
In the extraction group, FAs demonstrated significant increases in anteroposterior condylar position (p = 0.014, effect size = 0.657) and anteroposterior condylar joint position (p = 0.046, effect size = 0.525), indicating greater condylar remodeling compared to CAs. Figure illustrates changes in the anteroposterior ratio of condyle positioning across treatment groups, as calculated using the Pullinger formula. In the non-extraction group, FAs experienced a slight increase in posterior position (PP), a rise in centric position (CP), and a decrease in anterior position (AP), whereas CAs showed stable PP, increased CP, and reduced AP. The extraction group revealed more distinct changes: FAs showed a significant increase in PP, a decrease in CP, and a reduction in AP; CAs had stable PP, increased CP, and decreased AP. Overall, the data suggest a general increase in CP and a decrease in AP across both groups, with a notable increase in PP, particularly in FAs of the extraction group.

Pretreatment TMJ values can be used to assess changes and evaluate treatment outcomes after orthodontic or orthognathic procedures in adults. Detailed 3D measurements of TMJ structures help in understanding morphological and/or pathological alterations. The strong intra- and inter-observer reliability confirms the precision of CBCT in identifying landmarks, making it superior for evaluating osseous structures in the TMJ region, unmatched by conventional methods. This study is the first to use CBCT to evaluate TMJ changes in 3D before and after FAs and CAs, with and without premolar extractions, in adults with skeletal Class I malocclusion. The results offer valuable guidance for planning orthodontic treatment in patients without symptoms or those with mild TMD. The study's sample selection criteria standardized variables influencing outcomes.
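The anteroposterior ratio of condyle positioning discussed above can be computed directly from the linear anterior (A) and posterior (P) joint spaces. The sketch below is an illustration, not code from this study; the (P − A)/(P + A) ratio and the ±12% concentric cut-off follow the Pullinger formula as commonly cited in the wider literature, so both should be treated as assumptions here.

```python
def pullinger_ratio(anterior_space_mm: float, posterior_space_mm: float) -> float:
    """Anteroposterior condylar position ratio (%) from linear joint spaces."""
    a, p = anterior_space_mm, posterior_space_mm
    return (p - a) / (p + a) * 100


def classify_condylar_position(ratio_pct: float, threshold: float = 12.0) -> str:
    """Classify the condyle as concentric, anterior, or posterior.

    A larger posterior space (positive ratio) implies an anteriorly seated
    condyle; |ratio| <= 12% is conventionally read as concentric.
    """
    if ratio_pct > threshold:
        return "anterior"
    if ratio_pct < -threshold:
        return "posterior"
    return "concentric"
```

For example, equal joint spaces of 2.5 mm give a ratio of 0% (concentric), while the post-treatment extraction FAs means reported above (anterior space 2.85 mm, posterior space 2.36 mm) yield a negative ratio of roughly −9%, still inside the concentric band under this convention.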
Participants, all with skeletal Class I malocclusion, followed standardized treatment using either a non-extraction approach or extraction of four first premolars. The same orthodontic appliances (FAs and CAs) and mechanics were used. This strict focus ensured comparability in initial severity and minimized bias. Both groups had almost identical baseline characteristics and underwent treatment for the same duration.

TMJ comparisons in non-extraction group

In our study, both the FAs and CAs groups, with similar baseline characteristics and treatment durations, used non-extraction methods to address moderate crowding. This involved distalization of posterior teeth and expansion of the dental arches to maintain a Class I relationship. Frequently, this required the extraction of upper third molars to allow adequate movement of the first and second molars. Our study showed that FAs significantly impact TMJ dynamics, with notable increases in the anteroposterior condylar position and decreases in the vertical condylar position and inclination. This suggests TMJ adaptation due to changes in occlusal forces and masticatory muscle activity following orthodontic treatment. The increase in medial condylar inclination indicates condylar remodeling in response to the mechanical environment altered by FAs. These findings highlight the importance of monitoring TMJ health during and after orthodontic treatment to manage TMDs and ensure stable outcomes, emphasizing the biomechanical effects of orthodontic appliances on TMJ dynamics. Conversely, no significant changes were observed in patients treated with CAs, affirming their efficacy in maintaining measured TMJ values. Our findings align with the hypothesis that the maxillary incisors can constrain the mandible, pushing it posteriorly due to the maxillary dentoalveolar complex's inability to move anteriorly.
Proper evaluation and adjustment of the sagittal positions of the first molars are essential for maintaining optimal vertical dimension and functional occlusion. Additionally, muscle activity can influence condylar positioning, with lighter muscle contractions potentially positioning the condyle more inferiorly. The neuromuscular system may adjust the condyle position downward in response to occlusal forces to optimize occlusal contact. However, previous research indicates that the condylar position remains stable during orthodontic treatment, with no significant differences between extraction and non-extraction conditions. These findings underscore the complex interplay between dental positioning, muscle activity, and condylar positioning in achieving optimal orthodontic outcomes. The study examined condylar position changes using two methods: basal craniofacial reference planes and the Pullinger formula. The Pullinger formula revealed a shift towards a more centric condylar position in both the FAs and CAs groups post-treatment, indicating adaptive TMJ repositioning. Both groups showed an increase in the centric position and a decrease in the anterior position over time. The posterior position (PP) increased slightly in FAs but remained constant in CAs. The first method showed no statistically significant differences post-treatment in CAs cases. Changes in the condylar position determined by the Pullinger formula were also reflected using basal reference planes (MSP, HP, and VP), supporting the adaptive repositioning theory of the TMJ after orthodontic treatment. The findings showed no significant differences in TMJ spaces and volumetric joint space in the CAs group. The correction in the CAs group is due to aligner technology, which enables precise 3D tooth movement by encircling the tooth crown and applying correction forces.
This technology allows simultaneous correction of both the maxilla and mandible, enhancing treatment efficiency and stability during the maintenance stage.

TMJ comparisons in extraction group

In Class I malocclusions, premolars are extracted to address tooth and arch length discrepancies and reduce anterior teeth protrusion. The extraction space is used to alleviate crowding and retract anterior teeth while preserving the position of the posterior teeth through effective anchorage, which typically does not change the vertical dimension. Our findings indicated a significant increase in vertical dimension in the FAs group. Some argue that premolar extractions can decrease the vertical dimension of occlusion, leading to overclosure of the mandible, muscle foreshortening, and potentially resulting in TMDs. However, our results challenge this perspective, demonstrating that vertical dimension can increase despite extractions, possibly due to molar extrusion and other contributing factors. Our findings indicated significant changes in TMJ parameters in the FAs group, including a notable increase in the anteroposterior condylar position and an increase in the anterior joint space, while the posterior joint space decreased. Furthermore, the anteroposterior condylar joint position exhibited significant posterior shifts. These alterations may result from condylar rotation due to changes in the vertical dimension of occlusion and remodeling of the articular surfaces. The TMJ can adaptively remodel, with similar condyle displacements observed during static clenching. Orthodontic treatment triggers neuromuscular and skeletal adaptations, with dysfunction occurring only if changes surpass the patient’s adaptive threshold. Studies by Wyatt et al. and Ali et al. observed that maxillary anterior teeth retraction and premolar extractions can influence condyle position, making it more concentric post-treatment.
Carlton and Nanda found that premolar extractions significantly altered anterior and posterior joint spaces, particularly in Class II, division 1 malocclusion cases. FAs impacted the anteroposterior condylar position significantly more than CAs, increasing anterior joint space and decreasing posterior joint space. These findings highlight the need to consider TMJ parameters in orthodontic planning to avoid potential TMDs. The study suggests that CAs mitigate common side effects of other orthodontic appliances and offer better control over TMJ parameters due to their design, which encompasses the teeth on all surfaces and applies appropriate forces through digitally planned attachments. The anteroposterior shifts and increased vertical dimensions highlight the need for precise control during treatment to prevent maladaptive outcomes. These findings reinforce the importance of ongoing TMJ assessment during and after orthodontic treatment to possibly reduce the risk of developing TMDs, especially in extraction cases. CAs are a favorable choice for patients prioritizing TMJ health, especially those with joint sensitivities. Individualized appliance selection is crucial, and future research should examine whether TMJ remodeling enhances functionality or risks dysfunction. This study provides insights into TMJ adaptations following orthodontic treatment but has certain limitations. The retrospective design limits control over variables such as patient compliance and precludes direct assessment of occlusal force distribution, which may have influenced the findings. To minimize such biases, future studies should consider randomized controlled trials, or should match patients based on their preferences, to analyze the effects of different treatment modalities more accurately and free from external bias.
Additionally, incorporating patient-reported outcomes, such as comfort, satisfaction, and quality of life, alongside clinical outcomes, would provide a more holistic understanding of treatment impacts. The focus on adult patients with Class I malocclusion restricts generalizability of the findings to other populations, such as adolescents or individuals with different malocclusions. Furthermore, the lack of long-term follow-up prevents evaluation of the stability of TMJ changes and their potential impact on TMD risk. Future studies should incorporate diverse patient populations, longitudinal follow-up to assess the stability and progression of TMJ changes, and advanced imaging modalities such as magnetic resonance imaging (MRI) to assess soft tissue adaptations, including disc positioning and joint lubrication dynamics. Including occlusal and functional parameters, such as mastication patterns or occlusal force distribution, would also enhance the understanding of TMJ responses to orthodontic treatment.
Within the constraints of this study, the following conclusions were reached: (1) Compared to CAs, FAs exhibited better control of the vertical dimension. (2) The FAs group had significantly greater clockwise mandibular rotation than the CAs group. (3) FAs, particularly in extraction cases, appeared to influence condylar positions and joint spaces more significantly. (4) CAs were effective in correcting skeletal Class I malocclusion with minimal impact on TMJ parameters, suggesting their suitability for cases where TMJ stability is a priority.
These findings highlight the importance of tailoring orthodontic appliance selection to individual patient needs and TMJ health. Specifically, the results may inform appliance choice based on the unique characteristics of each malocclusion. Regular monitoring of TMJ parameters throughout treatment remains essential to optimize outcomes and minimize the risk of developing TMDs.
Preparedness for care transitions to home and acute care use of skilled nursing facility patients | 20ebc3f0-bc09-4f23-af7d-d57fa467117f | 11895266 | Community Health Services[mh] | Preparing older adults and their caregivers for care transitions is a global health concern [ – ]. In the U.S., care transitions are especially complex for the 1.5 million older adults per year who are admitted to a hospital, receive rehabilitative care over two-four weeks in a skilled nursing facility (SNF), and transfer again, to home and other settings of care . Helping families navigate these care transitions, while achieving goals of safety and patient-centered care, is complicated by the intensity of patient needs, and the fragmentation of health systems across settings of care. SNF patients are a population with complex health challenges . They are typically older than 75, have recent acute illness or injury (e.g., hip fracture, pulmonary infections) , and incurable chronic illnesses (e.g., heart failure and Alzheimer’s disease and related dementias) [ , , ]. They also experience fatigue related to hospital and SNF care , limitation in mobility and function [ – ], and dependence on caregivers for activities of daily living . Before SNF discharge, SNF patients and their caregivers (usually a spouse or adult child) participate in “discharge planning” or “transitional care” to prepare for care transitions from SNFs to home, assisted living and other destinations [ , – ]. They must identify safety needs and learn skills to manage medications, monitor for symptoms of recurring illness, and coordinate care with outpatient and community partners . After SNF discharge, 22% of patients are readmitted to a hospital within 30 days of discharge . Thus, research is needed to develop new tools and services for improving care transitions of SNFs patients and their caregivers . 
Measuring preparation for care transitions is essential for describing the quality of SNF discharge planning and transitional care and for evaluating new services to improve outcomes after discharge. Preparedness for care transitions is defined as patient and caregiver perceptions of feeling cared for by healthcare providers, having the right information to manage care, having confidence that providers communicate with each other, and feeling empowered to assert preferences. Findings in qualitative studies indicate that SNF patients and caregivers report limited preparation for care transitions and describe their consequent struggle to continue care without adequate information and support. Yet larger studies of preparedness for care transitions in SNFs, using surveys with established psychometrics, have not been undertaken, thereby limiting the evidence base for designing discharge planning and transitional care to meet patient and caregiver needs. In hospital-based research, commonly reported measures of preparedness for care transitions are the Care Transitions Measure-15 (CTM-15) and the abbreviated Care Transitions Measure-3 (CTM-3). Prior studies indicate the high internal consistency and reliability of the CTM-15 and CTM-3 for describing the quality of discharge planning and transitional care. Earlier studies have established the feasibility of using the CTM-15 to evaluate outcomes after SNF discharge. Studies evaluating the validity of the CTM-15 and CTM-3 in hospital settings show mixed findings on the association between CTM-3 or CTM-15 scores and the rate of acute care use after hospital discharge.
For example, two studies found that 10-point increases in CTM-15 or CTM-3 scores were associated with 12–14% lower odds of hospital readmission. Two others, including one large clinical trial, found that patient and hospital factors may influence the relationship between preparedness for discharge and acute care use, and that there was no relationship between CTM-15 scores and emergency department and hospital admission. However, use of the CTM-15 in SNF-based research is rare, and the potential of this measure to evaluate the quality of care is poorly understood. The objective of this study was to conduct a secondary analysis of data from the Connect-Home clinical trial (described below) and to describe the relationship between patient- or caregiver-reported preparedness for care transitions (measured with the CTM-15) and acute care use in the 30 days after discharge from a SNF. Earlier studies indicated that sociodemographic factors of SNF patients, including age, race, and income, are associated with acute care use after SNF discharge; thus, the secondary objective was to explore the influence of these sociodemographic factors, along with others including caregiver type and neighborhood deprivation, on the relationship between CTM-15 scores and acute care use.
The design was a secondary analysis of baseline and outcomes data that were collected as part of the Connect-Home efficacy study. All study procedures were approved by the University of North Carolina Institutional Review Board.

Original study

The original study of Connect-Home transitional care was a stepped wedge, cluster randomized trial testing whether pre-discharge support in the SNF and post-discharge support in the patient’s home improved preparedness for care transitions, the patient’s acute care use, and the caregiver’s experiences in the caregiving role. The study evaluated the impact of the Connect-Home transitional care intervention over 30 months on the primary outcome of preparedness for care transitions (measured with the CTM-15) and on secondary outcomes, such as patient functional mobility, quality of life, and acute care use, and caregiver burden and distress. The study was conducted in 6 U.S. SNFs located in North Carolina. Research staff, with standardized instruments and specialized training, recruited SNF patients with serious illness (e.g., end-stage kidney disease) and their caregivers (spouse, adult child, or other) between March 2019 and July 2021. Patient and caregiver dyads were eligible if the patient spoke English, had a serious medical condition, required at least 25% assistance with mobility at SNF admission, and had a caregiver who was willing to participate, and the caregiver spoke English and assisted the patient at home. A legally authorized representative (LAR) was recruited as a proxy for SNF patients with cognitive impairment. The COVID pandemic occurred in the middle of the Connect-Home trial; as part of national, mandated risk mitigation efforts, the study was paused for six months. After the pause, the study was re-started using an IRB-approved revised protocol. The data for this analysis were collected by research staff in face-to-face or telephone interviews with SNF patients and their caregivers.
Data source for the secondary analysis

We used baseline and outcomes data collected 7 and 30 days after the patient discharged from the SNF to home or other destinations. Patient baseline data were collected in a chart review of the SNF medical records system and included patient clinical characteristics, such as diagnosis category, Charlson Co-morbidity Index score, and Brief Inventory of Mental Status score, and sociodemographic characteristics, such as age, sex, and neighborhood disadvantage. Reporting of race in this study is consistent with the National Institutes of Health Inclusion of Women, Minorities, and Children policy. Race of patients included in this study was categorized as White or Black. Neighborhood disadvantage was described with the Area Deprivation Index (ADI), which is based on U.S. Zip code and is a measure of factors, such as the concentration of poverty, contributing to socio-economic disadvantage in U.S. neighborhoods; higher ADI scores are associated with hospital readmission. Caregiver baseline data were collected via telephone and included nonclinical characteristics, such as relationship to the patient and whether the caregiver resided in the same home as the patient. The outcome variable used in this secondary analysis was the number of days of acute care use in the 30 days after discharge from the SNF. The primary predictor variable was preparedness for care transitions (measured with the CTM-15) at 7 days after SNF discharge. The characteristics of the SNFs were obtained using a standardized survey administered with the SNF director of nursing or nursing home administrator. Data were included for all subjects with observed data for both preparedness for care transitions and acute care use.

Care transitions measure-15 and acute care use

Patient or caregiver-proxy reported preparedness for care transitions was measured with the CTM-15 at 7 days after SNF discharge.
The CTM-15, a previously published measure, is a 15-item, Likert-scaled instrument with five anchors: “Strongly disagree,” “Disagree,” “Agree,” “Strongly agree,” and “Not applicable/don’t know.” The CTM-15 was designed for patient or caregiver responses. It focuses on four domains: understanding of medications, a written record of discharge instructions, timely follow-up after discharge, and the ability to recognize challenges in health. While the CTM-15 was originally designed to study preparedness for hospital-to-home transitions, it was used in the parent study to study the impact of preparedness after SNF discharge, when care at home is complex and involves coordination with community providers. In the parent study, caregivers provided CTM-15 data for patients with cognitive impairment. To calculate CTM-15 scores, means are calculated for measure items and then a linear transformation is used to generate CTM-15 scores between 0 and 100, with higher scores indicating better preparation for care transitions. Acute care use, self-reported by SNF patients or caregivers in the role of proxy, was the count of days in an emergency department and the hospital in the 30 days after SNF discharge. For a subset of 17 patients, the patient was readmitted to a hospital before the 7-day data collection call; thus, for these patients, acute care use data were collected in the 7-day call. These data were included in the acute care use outcome.

Analysis

Descriptive statistics, mean and standard deviation (SD) for continuous variables and frequencies for categorical variables, summarized the background characteristics of the study participants.
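As a concrete illustration of the CTM-15 scoring described above, the item-mean-plus-linear-transformation step can be sketched as follows. Note that the 1–4 item coding ("Strongly disagree" = 1 through "Strongly agree" = 4) is an assumption drawn from the wider CTM literature; the paper itself states only that item means are linearly transformed onto a 0–100 scale.

```python
def ctm15_score(responses):
    """Score the CTM-15 on a 0-100 scale (higher = better prepared).

    `responses` holds the 15 item values coded 1 ("Strongly disagree")
    through 4 ("Strongly agree"); None marks "Not applicable/don't know"
    and is excluded from the item mean.
    """
    answered = [r for r in responses if r is not None]
    if not answered:
        return None  # no scorable items
    item_mean = sum(answered) / len(answered)
    # Linear transformation of the 1-4 item mean onto 0-100.
    return (item_mean - 1) / 3 * 100
```

Under this coding, a respondent who answers "Agree" (3) to every item scores about 67, uniform "Strongly agree" responses score 100, and "Not applicable/don't know" items simply drop out of the mean.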
The mean acute care use and CTM-15 scores (SD) were calculated overall and for patient subgroups defined by five sociodemographic characteristics of interest: sex (male, female), race (Black, White), Charlson total score, Area Deprivation Index score (ADI; potential range, 1–100), and caregiver relationship (adult child, spouse, other family, or non-relative). Charlson total score and ADI were continuous variables for which statistical evaluations were made at their quantiles, i.e., the three values corresponding to their observed 25th, 50th, and 75th percentiles. Spearman rank correlation (ρ) statistics for the bivariate association of acute care use and CTM-15 score were calculated overall and for each of the fifteen subgroups/evaluations; negative correlation values were expected, as it was hypothesized that greater preparedness would be associated with less acute care use. To assess the overall exposure effect of CTM-15 on acute care use days adjusting for background characteristics, we used two-part marginalized zero-inflated negative binomial (MZINB) regression, which models the overall mean count outcome (with a log link function), instead of standard ZINB models that model the mean count of a hypothetical latent (unobserved) ‘at-risk’ sub-population of SNF residents. In the initial stage of our analysis, we fit an MZINB model to estimate the overall association between CTM-15 (defined as a 10-unit change) and the number of acute care use days at the 30-day post-discharge call, with adjustment only for the design variables of the parent clinical trial: the treatment indicator (intervention vs. control condition), an indicator for the onset of the COVID pandemic (pre- vs. post-onset), and their interaction in the mean part of the MZINB model. The zero-inflation logistic regression part of the MZINB model includes CTM-15 scores with main effects of the treatment and COVID onset indicators as covariates.
The primary output of our analysis is the incident rate ratio (IRR) for the association of a 10-unit change in CTM-15 and the number of acute care days (and its 95% confidence interval), which is the ratio of the expected mean number of acute care days following a 10-unit reduction in CTM-15 to the mean number of acute care days at the referent level of CTM-15 (before the 10-unit reduction). Because the mean part of the MZINB model employs a log link function, the β-coefficient corresponding to CTM-15 is the log IRR; exponentiating it and the bounds of its 95% CI gives the estimate of the IRR and its 95% CI for the association of CTM-15 and acute care use days. Aligned with earlier research, we examined patient factors that may influence both preparedness and acute care use. Thus, following the initial stage of the analysis, our primary set of analyses estimates the association between CTM-15 and the number of acute care use days at 30 days after discharge according to the levels of the five baseline characteristics. The baseline variables are evaluated individually, each in its own MZINB model. Each MZINB model includes all covariates from the initial stage of our analysis plus the main effect of the baseline variable and its interaction with the preparedness score. While the analyses produce a p-value for the interaction effect to assess effect modification for each baseline variable, the focus of our exploratory analysis is to produce IRR estimates (and 95% CIs) of CTM-15 and acute care use days for each level (or quartile) of the baseline variables, which addresses a different set of hypotheses.
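Because the IRR described above is simply the exponentiated log-link coefficient scaled to the unit change of interest, it can be recovered from any fitted β and its standard error. The sketch below is illustrative arithmetic only; the coefficient and standard error used in the example are hypothetical, not estimates from this study.

```python
import math


def irr_for_change(beta_per_unit, se_per_unit, delta=10.0, z=1.96):
    """IRR and Wald 95% CI implied by a log-link regression coefficient.

    `beta_per_unit` is the fitted log-IRR per one-unit increase in the
    predictor; `delta` is the change of interest (e.g., 10 for a 10-unit
    increase in CTM-15, or -10 for a 10-unit reduction).
    """
    log_irr = beta_per_unit * delta
    half_width = z * abs(se_per_unit * delta)
    irr = math.exp(log_irr)
    ci = (math.exp(log_irr - half_width), math.exp(log_irr + half_width))
    return irr, ci
```

For instance, a hypothetical β of −0.02 per CTM-15 point implies an IRR of exp(−0.2) ≈ 0.82 for a 10-point increase, i.e., roughly 18% fewer expected acute care days; the same coefficient applied with delta = −10 gives the reciprocal IRR for a 10-point reduction.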
The original study of Connect-Home transitional care was a stepped-wedge, cluster-randomized trial testing whether pre-discharge support in the SNF and post-discharge support in the patient's home improved preparedness for care transitions, the patient's acute care use, and the caregiver's experiences in the caregiving role . The study evaluated the impact of the Connect-Home transitional care intervention over 30 months on the primary outcome of preparedness for care transitions (measured with the CTM-15), and secondary outcomes, such as patient functional mobility, quality of life, and acute care use, and caregiver burden and distress. The study was conducted in 6 U.S. SNFs located in North Carolina. Research staff, with standardized instruments and specialized training, recruited SNF patients with serious illness (e.g., end-stage kidney disease) and their caregivers (spouse, adult child or other) between March 2019 and July 2021. Patient and caregiver dyads were eligible if the patient spoke English, had a serious medical condition, required at least 25% assistance with mobility at SNF admission, and had a caregiver who was willing to participate; and if the caregiver spoke English and assisted the patient at home. A legally authorized representative (LAR) was recruited as a proxy for SNF patients with cognitive impairment. The COVID pandemic occurred in the middle of the Connect-Home trial; as part of national, mandated risk mitigation efforts, the study was paused for six months. After the pause, the study was re-started using an IRB-approved revised protocol. The data for this analysis were collected by research staff in face-to-face or telephone interviews with SNF patients and their caregivers.
We used baseline and outcomes data collected 7 and 30 days after the patient was discharged from the SNF to home or other destinations. Patient baseline data were collected in a chart review of the SNF medical records system and included patient clinical characteristics, such as diagnosis category, Charlson Co-morbidity Index score , Brief Inventory of Mental Status score , and sociodemographic characteristics, such as age, sex, and neighborhood disadvantage. Reporting race in this study is consistent with the National Institutes of Health Inclusion of Women, Minorities, and Children policy . Race of patients included in this study was categorized as White or Black. Neighborhood disadvantage was described with the Area Deprivation Index (ADI) , which is based on U.S. Zip code and is a measure of factors, such as the concentration of poverty, contributing to socio-economic disadvantage in U.S. neighborhoods; higher ADI scores are associated with hospital readmission . Caregiver baseline data were collected via telephone and included nonclinical characteristics, such as relationship to the patient and whether the caregiver resided in the same home as the patient. The outcome variable used in this secondary analysis was the number of days of acute care use in 30 days after discharge from the SNF . The primary predictor variable was preparedness for care transitions (measured with the CTM-15) in 7 days after SNF discharge. The characteristics of the SNFs were obtained using a standardized survey administered to the SNF director of nursing or nursing home administrator. Data were included for all subjects with observed data for both preparedness for care transitions and acute care use.
Patient or caregiver-proxy reported preparedness for care transitions was measured with the CTM-15 in 7 days after SNF discharge . The CTM-15, a previously published measure, is a 15-item, Likert-scaled instrument with five anchors, including "Strongly Disagree," "Disagree," "Agree," "Strongly Agree," and "Not applicable/don't know." The CTM-15 was designed for patient or caregiver responses. It focuses on four domains, including understanding of medications, a written record of discharge instructions, timely follow-up after discharge, and the ability to recognize challenges in health. While the CTM-15 was originally designed to study preparedness for hospital to home transitions , it was used in the parent study to study the impact of preparedness after SNF discharge, when care at home is complex and involves coordination with community providers . In the parent study, caregivers provided CTM-15 data for patients with cognitive impairment. To calculate CTM-15 scores, means are calculated for the measure items and then a linear transformation is used to generate CTM-15 scores between 0 and 100, with higher scores indicating better preparation for care transitions . Acute care use, self-reported by SNF patients or caregivers in the role of proxy, was the count of days in an emergency department and the hospital in 30 days after SNF discharge. For a subset of 17 patients, the patient was readmitted to a hospital before the 7-day data collection call; thus, for these patients, acute care use data were collected in the 7-day call. These data were included in the acute care use outcome .
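The scoring rule described above, item means followed by a linear transformation onto a 0–100 scale, can be sketched as follows. The 1–4 item coding and the dropping of "Not applicable/don't know" responses before averaging are illustrative assumptions, not details given in the text.

```python
def ctm15_score(responses):
    """Score one CTM-15 administration on the 0-100 scale.

    `responses` is a list of 15 item answers coded 1-4 (assumed:
    1 = Strongly Disagree ... 4 = Strongly Agree); None marks a
    "Not applicable/don't know" answer and is dropped before
    averaging. Higher scores indicate better preparedness.
    """
    answered = [r for r in responses if r is not None]
    if not answered:
        return None  # no scorable items
    item_mean = sum(answered) / len(answered)
    # Linear transformation: map the 1-4 item mean onto 0-100.
    return (item_mean - 1) / 3 * 100
```

Under this assumed coding, an item mean of 3.19, for example, maps to a score of about 73.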
Descriptive statistics, mean and standard deviation (SD) for continuous variables and frequencies for categorical variables, summarized the background characteristics of the study participants. The mean acute care use and CTM-15 scores (SD) were calculated overall and for the patient subgroups defined based on the five sociodemographic characteristics of interest: sex (male, female), race (Black, White), Charlson total score, Area Deprivation Index score (ADI; potential range, 1-100), and caregiver relationship (adult child, spouse, other family, or non-relative). Charlson total score and ADI were continuous variables for which statistical evaluations were made at their quantiles, i.e., the three values corresponding to their observed 25th, 50th and 75th percentiles. Spearman rank correlation (ρ) statistics for the bivariate association of acute care use and CTM-15 score were calculated overall and for each of the fifteen subgroups/evaluations; negative correlation values were expected, as it was hypothesized that greater preparedness would be associated with less acute care use. To assess the overall exposure effect of CTM-15 on acute care use days adjusting for background characteristics, we used two-part marginalized zero-inflated negative binomial (MZINB) regression, which models the overall mean count outcome (with a log link function), instead of standard ZINB models that model the mean count of a hypothetical latent (unobserved) 'at-risk' sub-population of SNF residents . In the initial stage of our analysis, we fit an MZINB model to estimate the overall association between CTM-15 (defined as a 10 unit change) and the number of acute care use days at the 30 days post-discharge call, with adjustment only for the design variables of the parent clinical trial: the treatment indicator (intervention vs. control condition), an indicator for the onset of the COVID pandemic (pre- vs. post-onset), and their interaction in the mean part of the MZINB model. 
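The bivariate screening step above, Spearman rank correlations between CTM-15 and acute care days overall and within subgroups, amounts to computing the Pearson correlation of the midranks. A minimal dependency-free sketch follows (in practice `scipy.stats.spearmanr` would do this directly); the data pairs at the end are hypothetical, not study values.

```python
def midranks(values):
    """Rank observations 1..n, assigning tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the midranks."""
    rx, ry = midranks(x), midranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    if sx == 0 or sy == 0:
        return float("nan")  # no variation in one variable
    return cov / (sx * sy)

# Hypothetical (CTM-15 score, acute care days) pairs: the expected negative
# sign reflects greater preparedness going with less acute care use.
ctm = [45, 55, 65, 75, 85, 95]
days = [4, 3, 2, 1, 0, 0]
rho = spearman_rho(ctm, days)  # negative under this made-up data
```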
The zero-inflation logistic regression part of the MZINB model includes CTM-15 scores with main effects of treatment and COVID onset indicators as covariates. The primary output of our analysis is the incident rate ratio (IRR) for the association of a 10-unit change in CTM-15 and the number of acute care days (and its 95% confidence interval), which is the ratio of the expected mean number of acute care days following a 10 unit reduction in CTM-15 to the mean number of acute care days for the referent level of CTM-15 (before the 10 unit reduction). Because the mean part of the MZINB model employs a log link function, the β-coefficient corresponding to CTM-15 is the log IRR; exponentiating it and the bounds of its 95% CI gives the estimate of the IRR and its 95% CI for the association of CTM-15 and acute care use days. Aligned with earlier research , we examined the influence of patient factors that may influence preparedness and acute care use. Thus, following the initial stage analysis, our primary set of analyses estimates the association between CTM-15 and the number of acute care use days in the 30 days after discharge according to the levels of the five baseline characteristics. The baseline variables are evaluated individually by including them in their own MZINB model. Each MZINB model includes all covariates from the initial stage of our analysis plus the main effect of the baseline variable and its interaction with preparedness score. While the analyses produce a p-value for the interaction effect to assess effect modification for each baseline variable, the focus of our exploratory analysis is to produce IRR estimates (and 95% CIs) of CTM-15 and acute care use days for each level (or quartile) of the baseline variables, which addresses a different set of hypotheses.
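Because the mean model uses a log link, a per-unit coefficient β for CTM-15 converts to an IRR for a 10-unit change as exp(10·β), and the Wald confidence bounds exponentiate the same way. A small sketch of that back-transformation follows; the coefficient and standard error passed in are illustrative values, not the fitted model's estimates.

```python
import math

def irr_for_change(beta_per_unit, se_per_unit, units=10, z=1.96):
    """Back-transform a log-link regression coefficient into an
    incidence rate ratio (IRR) for a `units`-sized change in the
    predictor, with a Wald 95% confidence interval.

    Under a log link, log E[days] = ... + beta * CTM15, so shifting
    CTM-15 by `units` multiplies the expected count by exp(units * beta).
    """
    log_irr = beta_per_unit * units
    half_width = z * se_per_unit * units
    return (math.exp(log_irr),
            math.exp(log_irr - half_width),
            math.exp(log_irr + half_width))

# Illustrative inputs only (hypothetical, not the fitted coefficients):
irr, lo, hi = irr_for_change(beta_per_unit=-0.0223, se_per_unit=0.0139)
# irr is about 0.80, i.e., roughly a 20% lower expected count per
# 10-unit increase; this interval crosses 1.0, so it would not be
# statistically significant at the 0.05 level.
```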
Of 327 dyads enrolled in the Connect-Home study, 249 patients (76.1%) had non-missing CTM-15 and acute care use data in 30 days. Among the 249 patients included in this analysis, 63.1% were female, 73.5% were White patients; average age was 76.3 years, and SNF length of stay was 23.9 days (Table ). After SNF care, 238 patients (95.6%) were discharged to home and 11 patients (4.4%) were discharged to assisted living. Among caregivers, 73.5% were female; the relationship to the patient was adult child (47.3%), spouse (22.9%) or other (29.8%); and 49.2% lived in the same home with the patient (Table ). The study SNFs were located in North Carolina, owned by a for-profit nursing home chain, had an average size of 113.5 beds, and average overall Nursing Home Compare quality rating of 2.7 out of 5 stars . Descriptive statistics for preparedness for care transitions (CTM-15) and acute care use are shown in Table . Respondents to surveys with the CTM-15 were 193 patients and 56 caregivers; the mean patient-reported CTM-15 score was 72.63 (SD = 1.67) and the mean caregiver-reported CTM-15 score was 73.84 (SD = 1.98). The overall mean CTM-15 score was 72.9 (SD = 17.52). The average total CTM-15 scores varied minimally across patient subgroups; for example, the average score of female compared to male patients was 73.56 (17.80) and 71.77 (17.09), respectively. Similarly, the average score of Black compared to White patients was 74.32 (16.54) and 72.01 (17.79). The average CTM-15 item scores varied minimally across the 15 individual scale items, with a range of average item scores of 3.02 (0.75) to 3.37 (0.69). CTM-15 score and acute care use During the 30 day follow-up, 14% of patients (35/249) had any acute care use, including 21 patients with hospital readmissions and 14 patients with emergency department visits without hospital readmission. The mean days of acute care use was 0.62 (SD = 2.58) with a range of 0–30 (see Table ). 
Notably, only 3 patients exceeded 7 days of acute care use (14, 15 and 30 days). The average number of acute care days was greater among male vs. female patients [0.83 (3.63) and 0.50 (1.69), respectively] and White vs. Black patients [0.74 (2.94) and 0.31 (1.06), respectively]. Moreover, acute care use was also higher for patients with spousal caregivers [0.88 (2.63)] or other family caregivers [1.01 (3.97)] vs. patients with adult children as caregivers [0.27 (0.96)]. As illustrated in Fig. , SNF patients with lower CTM-15 scores tended to have more acute care use in 30 days. For example, patients with CTM-15 scores of 50 or less had average acute care use of 1.4 days whereas patients with average CTM-15 scores of 70 or more had less than 0.4 days of acute care use; each of the three groups with CTM-15 < 70 in Fig. included one of the patients with acute care use greater than seven days. The negative association between CTM-15 and acute care use is also shown by the Spearman rank correlations in the subgroup analysis (Table ); among the five correlations whose absolute values exceed 0.15, four have negative signs representing an inverse relationship of CTM-15 and acute care use. In the primary multivariable analysis, using the MZINB regression, the estimated IRR = 0.80 (95% CI: 0.60, 1.05) represented a 20% reduction in acute care use for a 10 unit increase in CTM-15 score, which was not statistically significant ( p = 0.11) at the 0.05 level (Table ). While none of the interaction terms between baseline characteristics and CTM-15 scores were statistically significant at the 0.05 level, we examined the influence of patient and caregiver characteristics on the relationship between CTM-15 scores and acute care use through subgroup analyses. 
Based on having the smallest upper bounds of their 95% confidence intervals, we observed the strongest relationships between CTM-15 and acute care use for the following subgroups: patients with male sex, White race, high Charlson total score (upper quartile of 9), and those with low or middle neighborhood deprivation. For example, for patients with male sex, the estimated IRR = 0.67 (95% CI: 0.44, 0.99) represented a 33% reduction in acute care use for a 10 unit increase in CTM-15 score ( p = 0.048). Similarly, White patients experienced a 25% reduction in acute care use, as did patients with ADI at the middle level, whereas patients with ADI at the low level had a 31% reduction, with a 10 unit increase in CTM-15. Also, patients with Charlson score at the high level had a 22% reduction. P-values in these four subgroups ranged from 0.060 to 0.069. In total, the estimated IRR was less than 1.0 for all but two of the subgroup analyses, suggesting that an increase in preparedness reduces acute care use days.
This secondary data analysis of SNF patients and caregivers who participated in the Connect-Home efficacy trial shows patients had lower than expected acute care use in 30 days after SNF discharge, and a 10 unit increase in CTM-15 scores (preparedness for discharge) was associated with an estimated 20% reduction in acute care use in 30 days in the overall study population; however, the confidence interval of this estimate (IRR = 0.80; 95% CI: 0.60, 1.05) included 1.0, reflecting uncertainty in this finding. In subgroup analyses, we observed a statistically significant reduction in acute care use with increasing CTM-15 scores among male patients and a non-statistically significant trend in the same pattern among White patients, those with higher Charlson scores, and those with less neighborhood deprivation. While our descriptive and statistical analysis provided evidence to suggest that higher CTM-15 scores (i.e., greater preparedness for care transitions) are associated with lower acute care use, the multivariable-adjusted 95% confidence intervals for incident rate ratios in the overall study sample and for twelve of thirteen subgroup regression analyses were moderately wide and included 1.0, reflecting uncertainty in our results. On the other hand, the fact that estimated IRRs were less than 1.0 in the overall sample and in eleven of thirteen of these analyses provides favorable evidence that suggests the potential of the CTM-15 to measure the quality of SNF discharge planning and transitional care. Our ability to estimate the relationship between CTM-15 scores and acute care use with a high degree of precision was limited by the low rate of acute care use (14%) during the pandemic and a moderately small sample size ( N = 249 patients). 
While the CTM-15 has shown promise in this and some earlier research, our findings indicate that further evaluation in larger samples is necessary; for example, with larger sample sizes, a potentially larger number of hospital readmissions will permit more sensitive analysis of the impact of preparedness and post-discharge follow-up care on acute care use. Larger studies may be necessary before the CTM-15 can be widely used to guide improvement projects or decision making about the quality of care in SNFs. In our subgroup analysis of factors that influence the relationship between CTM-15 scores and acute care use, we found that the mean number of acute care use days of male patients decreased by 33% for a 10 unit increase in preparedness score (IRR = 0.67; 95% CI: 0.44, 0.99); we also observed trends (with p < 0.10) suggesting the influence of lower Area Deprivation Index scores, race (Black/White), and higher Charlson score on the relationship between CTM-15 scores and acute care use. These findings suggest the presence of individual and environmental factors that influence the impact of preparedness on acute medical needs in 30 days after discharge. For example, male patients may have caregivers with greater in-home availability and knowledge of the patient's medical needs, which may reduce risk of acute illness or injury . More research is necessary to determine the impact of these factors on preparedness for care transitions. While we found that White patients had a 25% reduction (IRR = 0.75; 95% CI: 0.55, 1.02), our finding that the rank correlation was only − 0.08 for White patients suggests this IRR result may be highly impacted by three White patients who had 14, 15, and 30 acute care use days; thus, further research with larger samples will be necessary to clarify the relationship of preparedness and acute care use in racial subgroups. 
Nonetheless, our findings align with earlier studies that indicate sex and neighborhood factors outside of preparedness for care transitions likely contribute to acute care use. Thus, reducing the risk of acute care use may require discharge planning or transitional care with a greater focus on social determinants of health (SDOHs), such as low income, lack of transportation, and limited access to insurance, social support, and quality medical care, which have been postulated to impact rates of hospital readmission . Optimizing discharge planning and care transition preparedness, particularly among vulnerable patient subgroups, will inform the development of interventions designed to reduce acute care use following SNF discharge. In this study, data were collected during the COVID pandemic, which had a profound impact on care provided in SNFs, such as discharge planning, and on outcomes after SNF discharge, such as acute care use. In this study with 249 patients, the rate of acute hospital transfers was 14.1%, while findings in an earlier study with more than 55,000 SNF patients indicated the rate was 21.1% . This difference (35%) in acute care use after SNF discharge aligns with earlier hospital-based research that indicated a large decrease in acute care use during the COVID pandemic, for example, differences in the rates of pre-COVID and post-COVID hospital admissions for ambulatory care sensitive diagnoses and complications related to heart failure . This finding is significant because it underscores that COVID-related factors may have contributed to the low observed rate of acute care use in our sample. It also suggests the large impact of COVID on hospital utilization after SNF discharge more generally. Finally, the challenges we faced in detecting associations between CTM-15 scores and acute care use may also indicate the absence of care practices that are necessary to prevent acute care use. 
Earlier hospital-based and SNF research indicates that effective transitional care includes pre-discharge and post-discharge care to support care transitions and prevent hospital readmission . In our study, SNF patients may have experienced limited post-discharge care, such as post-discharge telephone calls or home visits, which may have limited preparedness for discharge and contributed to risk for acute care use. This study was subject to several limitations. First, the Connect-Home trial was conducted during the onset of the COVID pandemic . COVID created new and frightening concerns for patients, families and staff, perhaps most importantly, uncertainty about the risk of illness and death and uncertainty about the precautions needed to prevent infection . Moreover, COVID was especially infectious in nursing homes and among older adults; thus the focus on discharge care in SNFs was likely overshadowed by concerns about infection control and haste to transfer patients from SNFs to home. Thus, the unknown impact of COVID on study outcomes (i.e., preparedness for care transitions and acute care use) likely influenced findings and increased the risk of bias in the results. Second, the study setting was six SNFs located in one U.S. state and the study sample was 249 patients. However, the SNF sample included sites with diverse quality ratings, ranging from 1 to 4 based on the 1 to 5 star rating system of US nursing homes . Moreover, while the sample included 249 patients, the sample was diverse; for example, 23.6% of the sample was Black, a rate more than twice the national rate of Black patients in SNFs. 
Further limitations in setting and sample are that we did not account for differences in cultural background, health beliefs, or lack of financial resources that might influence acute care use, and we did not examine the intersectionality of sociodemographic factors, for example, the acute care use of Black women living in poor neighborhoods compared to that of White men living in areas with better socioeconomic conditions. Third, attrition of participants in the parent study before the 30 day follow-up increases the risk of bias in study findings. Compared to patients with data collected in 7 and 30 days after SNF discharge, those for whom data collection was not possible had higher Charlson Comorbidity Index scores (7.1 vs. 8.1) and longer SNF length of stay (23.8 vs. 30.9 days) . Thus, a potential healthy volunteer bias may limit the generalizability of findings because participants who dropped out of the study potentially would have had lower preparedness and greater acute care use. Finally, the small number of residents in our study having any acute care use ( n = 35) limited the number of covariates that could be included in a regression model to justify large-sample normality of estimated coefficients, i.e., log incidence density ratios. Therefore, separate subgroup analyses for each of the demographic characteristics were conducted rather than fitting a single model that included all of their main effects and interactions with CTM-15.
Preparing patients to transition home is a primary goal of SNF care and a potentially useful measure of SNF quality. The finding that preparedness for transition was negatively associated with acute care use suggests the potential of the CTM-15 to measure the quality of discharge planning and transitional care. This finding is significant because research is urgently needed to identify and evaluate innovations for improving post-discharge outcomes of SNF patients. Research is needed to reach groups of patients and caregivers with limited or no access to high quality care.
Practice of general pediatrics in Saudi Arabia: current status, challenges, and opportunities | 240e6eab-5531-458a-bd4d-86751cb31b4e | 9617042 | Pediatrics[mh] | Pediatrics is a specialty of medical science that is concerned with children’s physical, mental, and social health from birth to young adulthood, as defined by the American Academy of Pediatrics (AAP) . It covers a broad spectrum of health services, ranging from preventive health care to diagnosis and treatment of both acquired and genetic illnesses. Furthermore, it ensures children’s and youths’ growth and development for prospering in their society. Pediatrics is a heterogeneous field of medicine. This heterogeneity extends to include variable ages and developmental stages. Prior to establishing the modern-day field of pediatrics, families, friends, and midwives attended to infants’ and children’s needs. Physicians rarely contributed to this population’s health in the past. As medicine evolved in the 19th and early 20th centuries, there grew an interest in creating a separate field for caring for sick children. The first known hospital in the western world that was devoted entirely to caring for children was the Sick Children’s Hospital, and was established in Paris, France in 1802 . By the 1850s, greater attention was given to the value of specialized training and education to equip future pediatricians . Pediatrics is one of the first specialties founded in Saudi. Arabia . Over the last two decades, it has rapidly become a well-recognized specialty in the young Kingdom, with many subspecialties . Furthermore, it has continued to grow into a successful example for the whole region . In 1981, the Saudi Pediatrics Association (SPA) was instituted, aiming to improve healthcare services provided for children all over the country . 
This original study investigates general pediatric providers' and pediatric trainees' perspectives on the current status of, and how to improve, general pediatrics in Saudi Arabia. It proposes a qualitative methodology through personal interviews. This methodology aims to reach a representative sample of the general pediatric workforce delivering child healthcare, to discover different viewpoints on successes, challenges, and opportunities. This approach can provide new insights on how to address many unvisited territories of child healthcare in the young Kingdom, and it is the first to explore the quality of general pediatrics in the region, as far as we know. Therefore, this study is meant to serve as a bridge between general pediatricians and policymakers in order to overcome challenges in the field and invest more in the successes of delivering high-quality child healthcare. Additionally, findings can voice the various new opportunities that might improve the practice of general pediatrics in the Kingdom. Study design This study adopted a qualitative approach, using video calls to practice social distancing during the COVID-19 pandemic. Interviews included 10–15 practitioners of general pediatrics in Saudi Arabia. The interviews included experienced pediatricians (each working for longer than ten years), early career pediatricians, and senior pediatric residents. Inclusion criteria were that the participants had to be a Saudi national, a practicing physician, holding an active or educational license, and willing to consent to participate in a one-hour interview. By interviewing doctors at different experience levels and of both genders, we were able to weigh different insights and be more inclusive. All interviews were audio recorded and transcribed later. Facial expressions, gestures, body language, tone, and all forms of non-verbal communication were documented to aid in data analysis. 
Interview questions Interview questions were all open-ended to capture a wide range of insights. The questions were developed based on literature review, practice observations, and experienced pediatricians' recommendations. To help participants elaborate when needed, certain props were agreed on for each question, and the interviewer was not allowed to use other tools to enrich the discussion. To access our interview questions, please contact the corresponding author. Participant selection We used a snowball sampling method to recruit general pediatricians for interviews. The snowball sampling method is a non-probability sampling technique where existing study subjects nominate future subjects from their acquaintances for possible interviewing. This method eliminated any potential selection bias from the investigators. The first interviewee was randomly selected from the general pediatric chairmen in Riyadh, Saudi Arabia. For logistical reasons, a residency program at the principal investigator's institution was selected for inviting pediatric residents to participate. All senior residents' names were organized in a list and numbered. Senior residents with odd numbers were selected for interviews. Ten senior residents qualified to participate using this method, but only six consented to partake in our study. Data analysis: We adopted a thematic analysis and hermeneutic phenomenology to analyze our data. First, transcriptions of interviews were reviewed using a thematic analysis to identify common denominators. This analysis allowed for themes to emerge to understand the common consensus among participants. Each theme emerged following a six-step analysis: familiarization, coding, reviewing, generating, defining, and writing. Later, adopting hermeneutic phenomenology allowed for illuminating all details and shining a light on trivial aspects from interviews. Those aspects helped in understanding participants' attitudes and perspectives. 
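The deterministic resident-selection step described above, numbering the roster and keeping the odd-numbered names, can be sketched as follows; the roster entries are placeholders, not the study roster.

```python
def select_odd_numbered(roster):
    """Number a roster 1..N and keep the odd-numbered entries,
    mirroring the resident-selection procedure described above."""
    return [name for i, name in enumerate(roster, start=1) if i % 2 == 1]

# Placeholder names for illustration only.
roster = ["Resident A", "Resident B", "Resident C", "Resident D", "Resident E"]
selected = select_odd_numbered(roster)  # entries numbered 1, 3, 5
```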
All analyses were done manually and no computer-based analysis was used. Study ethics The study was IRB approved by King Fahad Medical City, Riyadh, Saudi Arabia, with IRB numbers 20–574. Participants provided consent before partaking in the audio recording and were allowed to leave and withdraw their participation prior to data analysis. All data were kept confidential and used for the purpose of this study only. One interviewer conducted all of the interviews in order to minimize bias. The interviewer was trained to use only designed props and limit facial expressions and interruptions as much as possible.
Study participants The study was able to attract pediatricians from four hospitals in Riyadh, Saudi Arabia. Many pediatricians (a total of five) declined to participate and did not consent to be audio recorded due to the novelty of medical qualitative research in Saudi Arabia. Even residents were hesitant, and four excused themselves from participating (Table ). Value and attitude All consultants and trainees appreciate general pediatricians’ roles in Saudi society. They understand the value of their care and feel they have a more significant part to play in population growth and the Saudi 2030 vision. A pediatrician stated, “General pediatric is a basic necessity next to education for Saudi society,” and another pediatrician said, “General pediatric is the cornerstone of pediatric medicine, and no hospital can function without it.” However, trainees had a negative attitude toward general pediatrics as a specialty. One trainee stated, “It (general pediatrics) is a non-specialized field (laughing), yes, really!” Type and time of care The majority of participants think that general pediatricians should be considered primary care providers in Saudi Arabia. They believe in their duty to both healthy and sick children. The idea of having a general pediatrician for each child in the Kingdom was a common goal among participants. Still, all pediatricians voiced concerns about the lack of qualified providers to meet public demands. One pediatrician stated, “Saudi public is ready for the idea of a pediatrician for every child, but we do not have enough doctors.” Regarding the current “Well Baby Clinic Module of Care,” all participants raised some concerns. From being overcrowded to the lack of qualified providers, the module is not meeting the Ministry of Health’s designed standard of care.
A pediatric resident protested, “Clinic (well-baby clinic) is open for vaccines (only).” The current scheduled first visit at the age of two months was subject to criticism by all participants. They all wished for an earlier first visit, ranging from two days after discharge to a maximum of the first two weeks of life. The first visit can be used for early detection, breastfeeding support, follow-up on jaundice, and anticipatory guidance for parents, especially first-time parents. All of the pediatricians shared stories of complications of the current delayed first visit. One pediatrician described a child who developed kernicterus because of jaundice and delayed presentation to pediatric care. Age The majority of consultants suggested expanding the age of a pediatric patient beyond 14, which is the current cut-off age for pediatric care in most hospitals in Saudi Arabia. Five of the six consultants proposed that 18 be the new cut-off age, with one pediatrician asking to push it up to 21. Only one consultant was content with the current cut-off age. However, the consultant would not mind expanding the pediatric age limit to 18 if the rules of pediatric practice changed in Saudi Arabia. In contrast, residents were reluctant to extend the current age further for all children and suggested customizing the age limit based on needs. A resident declared some concerns about the challenges of caring for adolescents. This was echoed by a pediatric consultant who stated that “Youths between ages 14–18 years are lost, adults and we refuse to take care of them. I think we can provide them care as long as we address some cultural sensitivity and follow Islamic rules.” Furthermore, most consultants doubted the readiness of the current pediatric workforce to provide good care for youths.
Nevertheless, a pediatrician announced, “This generation is better than mine, and they will be ready because they have better training.” New era, new issues All of the participants declared some concerns about caring for children with developmental delays and behavioral issues. Some residents asked for more training despite having four weeks of training in child development in the current Saudi pediatric residency curriculum. However, pediatricians think the problem is poor exposure rather than lack of knowledge. One pediatrician blamed the current general pediatric culture of immediately referring every behavioral issue instead of managing it and gaining more experience when he said, “At the least, we have to start treating ADHD as it is common and easy to diagnose. Referral to child development takes up to six months; who will help the child until then?” Furthermore, vaccine hesitancy was identified as a new issue that pediatricians are facing in Saudi Arabia. Another pediatrician expected to deal more with substance use and eating disorders in the near future. Challenges The biggest challenge that all participants recognized was the high public demand and the low number of pediatricians per capita. A pediatrician emphasized that the needs might increase even further with the current Saudi population growth and fertility rates. Furthermore, all of the participants found the limited time allocated for each visit to be challenging. Another challenge acknowledged was the lack of community pediatricians outside of tertiary and secondary hospitals. All of the pediatricians agreed that separating well-baby clinics from general pediatric practice lowered the quality of care for infants.
One resident demanded, “Actually, we lack Arabic materials.” Opportunities All participants were proud of the progress that Saudi pediatric training has made through the years. Participating pediatricians asked for improved leadership-skills training and more procedural training in the Saudi pediatric residency program. Similarly, all participants were happy that there was mandatory communication skills training during the Saudi pediatric residency, but asked for further training. Residents want to be involved in designing and improving their training. Another resident asked to make education “less intimidating.” With the higher number of graduates, some public needs can be met, and care can be improved. Nevertheless, we need to encourage more residency graduates to pursue a career in general pediatrics, as three pediatricians explained. One pediatrician suggested adding a “Billing System” to attract more graduates and encourage better productivity than the current base salary system. Another pediatrician demanded channeling the pediatric residency focus to meet the Saudi public’s needs, especially in addressing autosomal recessive syndromes, complex care, and car safety. On the other hand, all participants had a positive attitude regarding virtual health and thought it could improve access to care, especially for patients from rural areas. The lack of a physical exam might be a downside of this innovative approach, as one pediatrician and two residents mentioned. Resources All of the participants identified the American Academy of Pediatrics (AAP) as their primary reference. The Canadian Pediatric Society (CPS) and UpToDate came second in most of the participants’ lists of resources. All of the participants complained about the cost of accessing these references, except for CPS, which is a free educational platform. Two residents recognized NEJM Plus—a continuous self-learning tool from the Saudi Commission for Health Specialties—as a good reference.
All of the participants were aware of the Saudi Pediatric Association (SPA). A pediatrician appreciated SPA’s continuous medical education hours while others wished for more. They hoped to see it more involved in building the Saudi pediatric practice guidelines, advocacy, providing Arabic materials, building young leaders, promoting pediatric science, addressing Saudi culture, and being a voice for pediatricians.
Discussion Pediatrics pioneers and first-generation child health advocates in the young Kingdom of Saudi Arabia have accomplished a lot in a short period of time. Their accomplishments are measured by considerable improvement in child health and lower mortality rates. Now, only seven children per 1000 die before their fifth birthday compared to 160 children in 1972. Despite suggestions from study participants to offer general pediatrics as a primary care service, it would be impractical to do so with the limited number of current practicing general pediatricians and the lack of community pediatricians. Integrating general pediatrics with family physician practices and offering pediatric training to family medicine trainees can be temporary alternatives until a larger workforce is available. Successful examples of training family medicine trainees are well established in countries like Canada. Luckily, such training is already in place in some parts of Saudi Arabia. Furthermore, having community pediatricians is crucial and cost-effective in Saudi Arabia, given the overutilization of pediatric emergency rooms. A study by Porter B. et al. showed that a pediatric community practice, rather than a tertiary-based general pediatric practice, can lower a child’s number of emergency visits.
Pediatric care may start periconceptionally and proceed from early gestation to early adulthood. The AAP previously released a statement on the age limit for pediatrics in 1988, which was reaffirmed in 2012, and established the upper age limit as 21 years. Alternatively, the CPS defines the upper age limit as 18 years of age. Despite recommendations from the Saudi Health Council to treat children until the age of 16 in pediatrics, the pediatric age limit varies widely among institutions, between 12 and 14 years of age. Knowing that nearly 30% of the Saudi population is under 14 years of age calls for a better-adjusted pediatric age limit in order to include all adolescents, especially middle and late adolescents (15–18 years old). Communication skills have been a hot topic of discussion in Saudi pediatric literature. They have always been criticized and deemed deficient. On the bright side, awareness of poor communication skills among trainees and pediatricians has improved, compared to the findings in an earlier report. Additionally, it is promising to see initiatives already in place to improve communication skills during residency training. However, communication skills training should start at earlier stages of medical education in medical schools and be culturally appropriate. Recent reports documented a high prevalence of pervasive developmental disorders in Saudi Arabia. Additionally, consanguinity has been linked to developmental delay. Consanguinity is a common practice in Saudi Arabia. Dealing with behavioral and developmental issues should not be limited to pediatric development specialists and child psychiatrists. Pediatric residency should prepare future pediatricians to address these issues, and some initiatives have already been implemented. General pediatric residency training programs in Saudi Arabia have evolved tremendously over the past decade.
Pediatric residency training is intended to instill the expertise, skills, and attitudes needed for family-centered healthcare. Likewise, it needs to have a prominent role in meeting complex 21st century health needs and demonstrating the overlap between clinical pediatrics and public health issues. Most importantly, it needs to address public needs while practicing culturally appropriate care. Relying on large, well-resourced organizations as references shows an eagerness to learn and follow recent updates among Saudi pediatricians and trainees. However, there is a need for a local organization to address culturally sensitive topics and unique problems affecting Saudi children and youths, like fasting during Ramadan for youths with type 1 diabetes, or the prevalence of certain metabolic diseases because of consanguinity. Fortunately, there is a well-established SPA, and hopes are high for its future role in generating practice guidelines, advocacy, health literacy, and meeting Saudi children’s needs. Saudi general pediatrics is well established and has made considerable contributions to Saudi society. It needs to recruit more residency graduates in order to meet public demands and 21st century needs. The goal is to have a primary care general pediatric service, starting with an early first visit in the first few days of life, for every child in Saudi Arabia. Pediatricians want to extend the current age limit to include more adolescents. They feel unready to address developmental delay and behavioral issues, and ask for more exposure to such cases. High demands, a low number of qualified physicians, poor communication skills, limited allocated clinic time, an unsuccessful well-baby clinic design, and a lack of community pediatricians are significant challenges for general pediatrics in Saudi Arabia. On the other hand, the current pediatric residency training gives a lot of hope for a brighter future.
From training residents on communication skills to addressing developmental delays and managing adolescents, the next generation of general pediatricians will thrive in the field. More importantly, they will continue improving child and youth health in Saudi Arabia and beyond. The number of participants was limited due to the novelty of qualitative methodological studies in Saudi Arabia. Many pediatricians were not comfortable with this approach and declined to participate. Even pediatric residents expressed dissatisfaction with this method. Additionally, it was hard to arrange one-hour interviews around their busy clinical schedules. All of the participants showed considerable reluctance to record the interviews. Another significant limitation was the inability to recruit pediatricians from other cities, despite all efforts. However, this paper can set the foundations for many more studies to help improve general pediatrics in Saudi Arabia. RIA: Acquisition of data, analysis and interpretation of data, revising the manuscript, and final approval of the version to be published. SMA: Substantial contributions to conception and design, drafting the article, revising it, and final approval of the version to be published.
Are Text‐Message Based Programmes Targeting Adolescents and Their Parents an Acceptable Approach to Preventing Adolescent e‐Cigarette Use? Introduction Adolescent e‐cigarette use (also known as vaping) has steadily increased in recent years, globally and within Australia, with e‐cigarettes now the most heavily used nicotine‐containing products amongst adolescents. A recent report by the Australian Institute of Health and Welfare indicated a five‐fold increase in current e‐cigarette use by 14–17‐year‐olds from 2019 to 2022. A systematic review of global evidence concluded that serious adverse effects posed by e‐cigarettes include acute lung injury, poisoning, burns and immediate toxicity through inhalation, including seizures. The review also concluded that amongst non‐smoking adolescents, e‐cigarettes provide no health benefits and double the odds of future tobacco use. The rise in youth e‐cigarette use is a considerable concern to parents and caregivers. Surveys of Australian parents have reported that whilst 70%–80% of parents/guardians (referred to as parents throughout for simplicity) were concerned that their child may try e‐cigarettes, more than half of parents (57%) had never discussed e‐cigarettes with their children. There are many known barriers to parents and children discussing e‐cigarettes, but evidence indicates that parents can play a positive role in the prevention of adolescent intentions to use e‐cigarettes. Given these concerns, preventing adolescent e‐cigarette use is a public health priority, and involving parents is a potentially potent strategy to help achieve this. In recent years, governments have taken legislative action to limit the supply of e‐cigarettes to adolescents, both internationally and within Australia.
In 2024, the Australian government introduced a ban on the importation of all disposable e‐cigarettes, designed to work in conjunction with legislation that prohibited the supply of e‐cigarettes (both nicotine and those marketed as nicotine‐free) to individuals under the age of 18 years. The World Health Organization and other (inter)national public health agencies recommend that efforts to curb this emerging public health issue be enhanced by supplementing such legislative action with public health programmes that focus on education and communication. Text‐message interventions have proven to be an effective public health approach to improving other adolescent health behaviours, including tobacco use, due to universal access to mobile phones and high engagement with digital technologies, while addressing the barriers to traditional interventions, such as cost, transportation and stigma. Text‐messages can also be scaled up and rapidly deployed through existing infrastructure. Online health coaching services (e.g., NSW Health's “Get Healthy”) and school‐based communication platforms already provide text‐message programmes for other health behaviours (e.g., nutrition and obesity prevention) and could be easily expanded to include e‐cigarettes. Existing evaluations of e‐cigarette text‐message programmes have predominantly focused on e‐cigarette cessation and have targeted older age groups (e.g., university students). One prevention study conducted in the United States found that delivering text‐messages focusing on the health harms of e‐cigarettes was feasible and acceptable to adolescents aged 14–18 years. However, it did not target or assess acceptability of the messages amongst parents, who play an integral role in influencing adolescent e‐cigarette behaviours.
The acceptability of employing such an approach to deliver a more comprehensive series of messages (including those aimed at developing self‐efficacy and behavioural control) is also unknown. Such information is needed prior to broader investment in the approach, to determine whether text‐messages align with the values, preferences and needs of end‐users, making it more likely that they will engage with such programmes and that the programmes will lead to better outcomes. This study therefore aimed to explore the acceptability of a series of text‐messages, distributed to adolescents and their parents, targeting factors (i.e., barriers and enablers) associated with adolescent e‐cigarette use.
Methods 2.1 Participants Adolescents aged between 12 and 15 years and their parent were eligible to participate in the study. The age range of 12–15 years was targeted as the intervention was aiming to prevent e‐cigarette use prior to commencement, and the average age of e‐cigarette initiation in Australia has been reported to be less than 15 years. Participants were recruited through a number of channels, including social media advertising, school newsletters, advertisements on community noticeboards, and contact with parents who had participated in previous research studies conducted by the research team. Interested parents completed an expression of interest form and were contacted by a research assistant to obtain consent. The research assistant then contacted the adolescent to obtain their assent. The study was approved by the University of Newcastle Human Research Ethics Committee (H‐2022‐0340). This study was conducted as part of a larger factorial randomised controlled trial (RCT) (ACTRN12623000079640), in which participants were allocated to one of four arms: (1) parent‐only text‐messages, (2) adolescent‐only text‐messages, (3) parent and adolescent text‐messages and (4) control. Only participants who received text‐messages (i.e., parents in groups 1 and 3, and adolescents in groups 2 and 3) were provided the text‐message acceptability questions reported here. 2.2 Intervention Adolescents and parents received a series of text‐messages, one delivered per week over 12 weeks. Each message addressed a different factor(s) associated with adolescent e‐cigarette use, including knowledge of harmful health effects, peer influence and social norms, refusal skills and the availability of parent support. These theoretically informed text‐messages were developed through a comprehensive co‐design process, with factors identified through an extensive scoping review.
The co‐design process included focus groups, surveys and iterative text‐message writing activities conducted with parents, adolescents, parenting research experts, Aboriginal health workers, health promotion officers and managers, e‐cigarette experts and behavioural scientists to develop and refine the messages. To foster positive conversation amongst families, both the adolescent and parent text‐messages targeted similar factors each week. However, the specific content of each text‐message was tailored to each target group (i.e., adolescent or parent). Text‐messages were personalised by using the recipient's name and were sent at the time nominated as most suitable by the recipient each week. The intervention has been described in more detail elsewhere. 2.3 Outcomes Six months after receiving the first text‐message, the acceptability of the text‐messages was assessed via an online survey. Adolescents and parents were asked to indicate their agreement with a range of statements using a 5‐point Likert scale (strongly disagree to strongly agree). Participants who responded “strongly agree” or “agree” were combined to give the number who agreed with each statement. Subgroup analyses were also conducted to determine whether agreement differed by ever use of e‐cigarettes, socio‐economic status or rurality. Ever use was assessed by asking participants to indicate “Yes” or “No” to the following statement: “Have you ever tried an e‐cigarette or vaping device, even one or two puffs?”. 2.4 Data Analysis Frequencies and percentages were calculated using Microsoft Excel for the outcomes. Participant postcodes were used to classify socioeconomic status and rurality.
Participants with a postcode in the top five deciles were classified as being located in a less disadvantaged (high socioeconomic status) area and participants with a postcode in the bottom five deciles were classified as being in a more disadvantaged (low socioeconomic status) area as per the Index for Relative Socio‐Economic Disadvantage (2021 Socio‐Economic Indexes for Australia) . Postcodes were also used to classify participant geolocation as “major cities” or “regional/remote” based on the Australian Bureau of Statistics Remoteness Areas .
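The classification and tabulation steps described in the data analysis can be sketched as follows. This is a minimal Python illustration only: the postcode-to-decile and remoteness lookup tables are invented placeholders standing in for the full 2021 SEIFA and ABS Remoteness Areas data, and the Likert responses are made up, not study data.

```python
# Hypothetical lookup tables — the real classification uses the complete
# ABS SEIFA 2021 IRSD decile and Remoteness Areas tables, not these values.
SEIFA_DECILE = {"2000": 9, "2830": 2, "2795": 5}
REMOTENESS = {"2000": "major cities", "2830": "regional/remote", "2795": "regional/remote"}

def ses_group(postcode):
    """Top five IRSD deciles -> less disadvantaged (high SES); bottom five -> low SES."""
    return "high_ses" if SEIFA_DECILE[postcode] >= 6 else "low_ses"

def percent_agree(responses):
    """Collapse a 5-point Likert item: 'agree' + 'strongly agree' counted as agreement."""
    agreed = sum(r in ("agree", "strongly agree") for r in responses)
    return round(100 * agreed / len(responses), 1)

likert = ["strongly agree", "agree", "neutral", "agree", "disagree"]
print(percent_agree(likert))                  # → 60.0
print(ses_group("2830"), REMOTENESS["2830"])  # → low_ses regional/remote
```

In practice the same collapsing and grouping was done in Microsoft Excel; the sketch just makes the decision rules explicit.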
Results
Of the 40 adolescents (one adolescent withdrew before 6‐month follow‐up) and 41 parents who received the intervention and were eligible to report on acceptability at 6‐month follow‐up, data were available from 30 adolescents (75%) and 35 parents (85%). As can be seen in Table , the majority of adolescents (77%) and parents (94%) agreed that the text‐messages were an acceptable way of receiving information about e‐cigarettes. In addition, 86% of parents agreed that the text‐messages were useful for improving their ability to discuss e‐cigarettes with their child, and 91% of parents agreed they would recommend the programme to other parents. Compared with parents, a slightly lower proportion of adolescents (73%) agreed that they would recommend the programme to other adolescents, and just over half of the adolescents (60%) agreed the messages helped their ability to refuse e‐cigarettes. Both parents (89%) and adolescents (77%) agreed the text‐messages were useful for improving their knowledge about e‐cigarettes. Levels of acceptability were broadly similar for parents from areas of both high and low socioeconomic status (91% vs. 100%, respectively), and from major cities compared with regional/remote areas (97% vs. 83%, respectively). There was a particularly high rate of acceptability of the text‐messages amongst parents who also identified as ever‐users of e‐cigarettes, with 100% of these parents agreeing the text‐messages were acceptable and useful for improving their knowledge, and that they would recommend them to other parents. Interestingly, a smaller proportion of adolescents from a major city agreed that the messages improved their ability to refuse e‐cigarettes compared with those from a regional/remote area (56% vs. 80%, respectively). Adolescent ever‐users were less likely than non‐users to agree that the text‐messages increased their knowledge of the potential harms of e‐cigarettes (57% vs. 83%, respectively).
Discussion
The findings of this study demonstrate that text‐messages are an acceptable method for delivering content designed to prevent e‐cigarette use amongst adolescents. Encouragingly, both adolescents and their parents agreed the messages were acceptable, and the majority would recommend the programme to their peers. This finding is consistent with other emerging research in the area, which found text‐messages about e‐cigarette harms were acceptable to adolescents. Levels of acceptability tended to be higher among parents than adolescents across all acceptability questions. This may reflect that parents found the content of the text‐messages more relevant than adolescents did, potentially perceiving e‐cigarette use amongst youth as more of an issue than the adolescents themselves, given previous research with Australian adolescents has found e‐cigarette use is perceived as common and normal. It could also be that parents rarely receive information about vaping through other mediums, whereas adolescents may have higher levels of exposure to such messaging (e.g., through school), and perhaps receiving this information via text‐message is less salient to them. Further research is required to explore the reasons for the varied levels of acceptability between parents and adolescents. We found particularly high levels of acceptability of the text‐message programme amongst parents who identified as ever‐users of e‐cigarettes. While the current study design does not permit us to explore the reasons for this, it suggests that future research focussed on parents who have experience using e‐cigarettes may provide unique insights into the role of these parents in preventing e‐cigarette use in their adolescent children. Levels of agreement that the text‐messages improved ability to refuse e‐cigarettes were lower amongst adolescents classified as coming from a major city.
This may reflect the environments in which the adolescents were living (e.g., the availability of vapes) and should be investigated further to determine what could be causing the difference. Levels of acceptability were similar across both low and high socioeconomic areas, although it should be noted the sample was primarily from high socioeconomic areas. A limitation is the use of postcode to determine socioeconomic status, as this reflects a population‐based indicator of socioeconomic status only. Future studies could determine this more accurately by using a measure such as self‐reported household income. Further limitations include the small sample size, which precluded the use of inferential statistics to determine the statistical significance of any differences. Exploring reasons for differences in acceptability was also beyond the scope of this study but could be explored using qualitative approaches in future work.
Conclusions
Findings of the evaluation indicate that a text‐message programme targeting adolescents and their parents is an acceptable and promising approach to preventing adolescent e‐cigarette uptake. However, the effectiveness of the programme on adolescent e‐cigarette behaviours needs to be established prior to broader scale‐up and investment. As noted previously, this study is part of a broader factorial RCT currently underway testing the potential effect of the text‐messages on adolescent susceptibility to, and use of, e‐cigarettes and tobacco. Findings from this body of research will contribute to a currently limited evidence base and provide policymakers, practitioners and funders with guidance on the types of health promotion interventions targeting adolescents and their parents that are potentially effective in addressing this public health priority.
Ethics approval to conduct the research was obtained from the University of Newcastle Human Research Ethics Committee (H‐2022‐0340).
The authors declare no conflicts of interest.
Health literacy profiles of final year pre‐service teachers in two initial education programs compared with the general population: A cross‐sectional study using the Health Literacy Questionnaire
INTRODUCTION
Non‐communicable diseases (NCDs) are the leading cause of mortality and account for over 74% of the global burden of disease (WHO, 2019). NCDs led to 277 million premature deaths among people aged between 30 and 70 years residing in low‐income and middle‐income countries from 2000 to 2019. NCDs have also emerged as the leading causes of death and ill‐health in high‐income countries, including Australia. The Australian Institute of Health and Welfare (AIHW) found that the top five disease groups, accounting for more than two‐thirds of the burden of disease in Australia (i.e., the impact of living with illness and injury and dying prematurely), were cancer (18% of the total burden), followed by musculoskeletal conditions, cardiovascular diseases, mental health conditions and substance use disorders (each 13%). These diseases disproportionately affect people living in: (1) the Northern Territory (1.4× higher than those living in other states); (2) remote and very remote areas (1.4× higher than those living in major cities); and (3) lower socio‐economic regions and areas (1.6× higher than those living in higher socio‐economic areas), increasing health inequity within certain populations. Therefore, the burden of NCDs falls hardest on socially or economically disadvantaged countries, as well as socially and economically disadvantaged people who reside in middle‐ and high‐income countries, affecting them and those around them, and contributing to the health inequities experienced in these populations.
One of the United Nations' Sustainable Development Goal targets (SDG target 3.4) is focused on reducing premature mortality from NCDs by one third through prevention, treatment and promotion of health and wellbeing. The 9th Global Conference on Health Promotion in Shanghai in 2016 named health literacy (HL) as one of the key health promotion pillars to achieve the 2030 Agenda for Sustainable Development Goals, as HL can reshape the prevalence and distribution of NCDs and associated risk factors. Prior to the Shanghai Conference, a policy brief on HL developed by the World Health Organisation's (2015) European Observatory on Health Systems and Policies identified several benefits for education sectors and schooling regarding the promotion of HL. These included: (1) increased academic performance; (2) improved health outcomes; and (3) greater cost‐effectiveness. These two pivotal moments have led to HL being recognised as a social determinant that can be developed throughout the lifespan from the early years, and to the recognition of educational institutions as powerful settings to foster HL knowledge and capabilities in children, as schools can reach most, if not all, young people from across a range of social and economic backgrounds. Since then, schools in the United States, Australia, Scotland (Scottish Government, ) and Finland have addressed HL in the curriculum, most often as part of health education and a holistic approach to school health promotion. HL has two interrelated components: individual HL and the HL environment. Individual health literacy refers to one's ability to find, understand and communicate health information, including the capacity to make critical judgements about health claims. Health literacy capabilities can empower individuals to make informed health decisions, practice healthier behaviours and modify individual determinants of health.
The HL environment refers to the external supports around an individual that help them make informed and autonomous health decisions. For instance, in a whole‐school approach, school health education and the school environment (through policies and practices) can support students' health and HL development, enabling them to practise healthy eating choices throughout the school day by accessing, purchasing and consuming healthy options at breakfast clubs, the school canteen or other food providers. Promoting individual HL, and developing the capabilities that give schools and teachers opportunities to create positive, supportive and enabling HL environments for themselves, colleagues and students, is imperative and is the focus of this study. The current Australian HPE curriculum (AC:HPE) and the NSW Personal Development, Health and Physical Education (PDHPE) syllabus have both adopted HL as a key concept to be taught from Foundation to Year 10, with HL named as one of the five underpinning propositions, represented as ‘developing health literacy’. Health literacy is described in the AC:HPE and NSW PDHPE curriculum as ‘an individual's ability to gain access to, understand and use health information and services in ways that promote and maintain health and wellbeing’ (p. 25), with HL implicitly embedded through Nutbeam's three‐level hierarchical model of functional, interactive and critical HL. This is applied to the syllabus across several focus areas, including alcohol and other drugs, food and nutrition, the health benefits of physical activity, mental health and wellbeing, relationships and sexuality, safety, challenge and adventure activities, games and sports, lifestyle physical activities and rhythmic and expressive movement activities. The presence of HL as a proposition, using Nutbeam's model, is indicative of the importance of HL and its role in contributing to the holistic development of Australian children and youth.
However, as HL is positioned as a proposition in the AC:HPE and NSW PDHPE syllabus, it is important to note that it is not clearly articulated in the outcome statements that teachers use for programming and planning purposes. Recent Australian studies have focused on the need to equip secondary and primary teachers to teach HPE to develop students' HL, and to navigate the contextual, time and curriculum constraints that marginalise the teaching of learning areas such as HPE. According to the Australian Professional Standards for Teachers, a well‐trained teacher, who has developed professional content knowledge and pedagogies, and who knows their students, will generally be confident in teaching their learning area, which includes health education and providing an environment conducive to HL promotion. In Australia, the Health and Physical Education (HPE) teacher is primarily responsible for teaching HPE and is therefore considered a well‐trained specialist. HPE teachers often teach in a secondary school setting. Primary school teachers, who are considered generalists, are expected to develop HL through their HPE programming with much less education in this area, as they are responsible for teaching several learning areas and must often balance other competing priorities and demands. With the AC:HPE curriculum now in practice for approximately 8 years and the NSW PDHPE syllabus implemented for at least 5 years, it is surprising that Australian primary and secondary teachers still feel under‐prepared to teach HPE to enhance students' HL. However, when engaged in professional development programs focusing on HL, Australian primary and secondary HPE teachers' HL pedagogical knowledge and confidence to teach HPE to enhance students' HL increase.
Finding time to work with in‐service teachers is difficult, as there is an Australian teacher shortage, with teachers experiencing elevated stress and burnout, and many competing demands on teachers' time. With very little research focusing on developing pre‐service teachers' (PST) HL knowledge and capabilities, more effort needs to be directed to initial teacher education (ITE) programs and the development of PST HL knowledge and capabilities. One study in Indonesia has focused on PST and HL, but in the context of the COVID‐19 pandemic, finding that 53% of 704 biology PST identified as having low levels of HL. In the only Australian study focusing on final year PST (the year before PST graduate and begin teaching in schools), Kealy‐Ashby et al. found that HL levels, as measured by the Health Literacy Questionnaire (HLQ), were higher across all nine domains for HPE PST when compared with Primary PST, showing medium to very large effect size differences, with all but one domain reaching statistical significance. These large differences in HL knowledge and capabilities were likely due to differences in course and unit offerings, with HPE PST completing at least four more health education and pedagogy units than Primary PST. Despite the differences between the two sets of PST, focus group interviews emphasised the need for both ITE programs to focus on developing PST individual HL levels and to implement authentic opportunities to practise enhancing students' HL through curriculum using the whole school approach. Further, given that teachers and PST have received a university education that has embedded health education, it could be assumed that the HL of teachers and PST is higher than that of their students and the broader community. However, evidence to support this is lacking.
This study aims to identify the HL strengths and challenges of PST at one Australian university across two ITE programs (HPE and Primary) and compare the HL profiles of these PST with that of the Australian general population. It is envisaged that a further understanding of the HL profiles of Australian PST will be able to influence the design of ITE programs and their health education curricula to ensure that PST are able to further develop their HL understandings and capabilities.
METHOD
2.1 Study design and setting
A cross‐sectional survey to measure HL in Australian PST was conducted using the HLQ in March and April 2022. Paper copies of the HLQ were distributed in tutorial rooms on the Australian University's campus. The research setting was a university in Sydney, Australia, with two of the University's undergraduate ITE programs (HPE and Primary). Ethics approval was granted by the University's human research ethics team (2022/161). At the time of survey administration, health education and HL were taught as part of both ITE programs. The STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) Statement was followed to ensure the transparent reporting of the study.
2.2 Participants and recruitment
This project involved purposively recruiting and sampling PST who were: (1) enrolled in a Bachelor of Primary Education (Primary) or a Bachelor of Health and Physical Education (HPE) program at one Australian University; and (2) in their final (fourth) year of study, or had completed the number of units of study equivalent to being in their fourth/final year. One of the researchers attended the last 10 min of face‐to‐face tutorials in semester week one to introduce the research project to potential participants. Participants were provided with the participant information statements and participant consent forms, with students asked to complete the consent forms and return them at the end of the discussion (in a drop box).
The HLQ was then issued the following week to consenting participants to capture HL knowledge and capabilities. Of the 24 PSTs in the HPE cohort, 23 consented and completed the HLQ (96%). Of the 70 PSTs in the Primary cohort, 34 consented and completed the HLQ (49%). Demographic data were collected (age, sex). Final year PST were the focus, as the study aimed to determine HL levels of PST after formal learning through the core health education and pedagogy units of study that relate to promoting students' HL was complete.
2.3 Survey tool
The study employed the HLQ, a 44‐item multidimensional health literacy assessment tool developed in Australia by Osborne et al. The HLQ has undergone extensive validation testing; it is psychometrically sound and reliably measures an individual's ability to seek, understand and use health information. The selected HLQ assesses HL as a multi‐dimensional concept categorised into nine sections, called domains, rather than producing a single score, hence offering a broad and valid understanding of a person's HL capabilities. This is the best method for measuring HL because it is an efficient measure, providing data across a range of domains; it has been used to collect data from young adults in university settings previously, and was the survey of choice for the Australian Bureau of Statistics Health Literacy Survey in 2018. The nine domains have been deemed valid and reliable for young adults, with composite reliability ranging from 0.8 to 0.9. The HLQ consists of 44 questions across the following nine domains: (1) feeling understood and supported by healthcare professionals; (2) having sufficient information to manage my health; (3) actively managing my health; (4) social support for health; (5) appraisal of health information; (6) ability to actively engage with healthcare professionals; (7) navigating the healthcare system; (8) ability to find good health information; and (9) understand health information enough to know what to do.
Scale ranges are from 1 to 4 in the first five domains and from 1 to 5 in domains 6–9. For domains 1–5, participants answered questions on a scale from strongly disagree to strongly agree (1–4). For domains 6–9, they responded by selecting ‘cannot do or always difficult’, ‘usually difficult’, ‘sometimes difficult’, ‘usually easy’ or ‘always easy’. Each item is presented as a statement for which the participant selects the most appropriate response reflecting their current situation. Scores on domains 1–5 reflect participants' beliefs about the resources that can be accessed to manage their own health. Domains 6–9 reflect their beliefs about how easy or difficult the tasks depicted in the items were for them to achieve. The researchers used the HLQ under licence agreement from Swinburne University of Technology, Victoria, Australia.
2.4 Statistical analysis
Descriptive statistics, including means and standard deviations, were calculated for demographic characteristics (age). We calculated means and 95% confidence intervals for each of the nine HLQ domains for Primary and HPE PSTs. We then compared these means to the Australian population means (from the 2018 Health Literacy Survey) using one‐sample t‐tests. A p‐value <0.05 was considered statistically significant. Cohen's d was calculated for each domain by dividing the mean difference by the standard deviation, with effect sizes interpreted as 0.2 (small), 0.5 (medium) and 0.8 (large). All analyses were performed in SAS Enterprise Guide 9.4 (SAS Institute, Cary, NC, USA).
RESULTS
The PST participants' median age was 22.3 (SD 4.3) years; ages ranged from 20 to 54 years, with 36% of participants under the age of 22 years. Of the participant sample, 60% were Primary PST (40% HPE PST). The mean item scores for each of the nine HLQ domains for Primary and HPE PST are presented in Table . Scores for domains 1–5 reflect PST self‐reported assessment of their supports, resources and ability to manage their own health. Domain mean scores ranged from 2.95 to 3.41 (out of a maximum score of 4.00), indicating general agreement with the items associated with these domains. The highest score recorded was for Domain 4 (Social support for my health) and the lowest score recorded was for Domain 5 (Appraisal of health information). Items in domains 6–9 explored PST beliefs about how easy or difficult the tasks described were to achieve at that time. Across these four domains, mean scores ranged from 3.91 to 4.07 (out of a maximum score of 5.00). In general, PST reported that actively engaging with health care providers was usually easy, yet navigating the health care system was at times difficult.
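The one‐sample comparison and effect‐size calculation described in the statistical analysis can be sketched as below. The study ran its analyses in SAS; this is an illustrative Python version only, and the domain scores are invented values, not the study data (scipy.stats.ttest_1samp would additionally supply the p‐value).

```python
from statistics import mean, stdev

def one_sample_t(sample, pop_mean):
    """t statistic for testing whether the sample mean differs from a known population mean."""
    n = len(sample)
    return (mean(sample) - pop_mean) / (stdev(sample) / n ** 0.5)

def cohens_d(sample, pop_mean):
    """Effect size: mean difference divided by the sample standard deviation
    (interpreted as 0.2 small, 0.5 medium, 0.8 large)."""
    return (mean(sample) - pop_mean) / stdev(sample)

# Hypothetical Domain 4 (Social support for health) scores for 8 PST,
# compared against the national mean of 3.19 reported in the text.
domain4 = [3.5, 3.25, 3.75, 3.0, 3.5, 3.75, 3.25, 3.5]
print(f"t = {one_sample_t(domain4, 3.19):.2f}, d = {cohens_d(domain4, 3.19):.2f}")
# → t = 2.71, d = 0.96 (a large effect under the convention above)
```

The same arithmetic underlies the SAS output reported in the Results; only the software differs.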
3.1 PST versus general population
When comparing the combined PST data with the general population, there were two significant differences with medium effect sizes: (1) PST scored significantly higher than the general population for Domain 4 (Social support for health, 3.41 vs. 3.19; p < 0.001; d = 0.57); and (2) PST scored significantly lower for Domain 9 (Understand health information enough to know what to do, 4.02 vs. 4.27; p < 0.01; d = −0.43). When comparing each of the two PST data sets with the data generated from the general population aged 18 years and over (see Table ), there were a number of significant differences. Primary PST scored significantly lower than the general population for Domains 5 (Appraisal of health information, 2.74 vs. 2.92; p = 0.04, d = −0.34), 6 (Ability to actively engage with healthcare professionals, 3.83 vs. 4.18; p < 0.01; d = −0.49), 7 (Navigating the healthcare system, 3.60 vs. 4.02; p < 0.01; d = −0.56), 8 (Ability to find good health information, 3.64 vs. 4.09; p < 0.001; d = −0.69), and 9 (Understand health information enough to know what to do, 3.75 vs. 4.27; p < 0.001; d = −1.03). Medium effect sizes were evident for Domains 5–8; however, a very large effect size was found for Domain 9. This shows that, when compared with the Australian general population, Primary PST found it very difficult to engage with the health care system and health professionals to be able to determine appropriate health information to take healthy and positive action for their own health and wellbeing. The HPE PST, when compared with the general population, scored significantly higher for Domains 1 (Feeling understood and supported by health care providers, 3.45 vs. 3.18; p = 0.04, d = 0.42), 2 (Having sufficient information to manage my health, 3.46 vs. 3.17; p = 0.02, d = 0.47), 3 (Actively managing my health, 3.49 vs. 3.09; p < 0.01, d = 0.65), 4 (Social support for health, 3.59 vs.
3.19; p < 0.001, d = 1.09), 5 (Appraisal of health information, 3.27 vs. 2.92; p < 0.01, d = 0.68), 7 (Navigating the healthcare system, 4.37 vs. 4.02; p = 0.02, d = 0.51) and 8 (Ability to find good health information, 4.45 vs. 4.09; p < 0.01, d = 0.67). Medium effect sizes were evident for Domains 1–3, 5, 7 and 8; however, a very large positive effect size was found for Domain 4. This showed that HPE PST perceived that they had the knowledge and social resources to manage their own health and felt capable of engaging with health systems and personnel to act for the benefit of their own health, with these perceptions scoring higher than those of the Australian general population.

DISCUSSION

The aim of this study was to identify the HL strengths and challenges of PST at one Australian university across two ITE programs (HPE and primary) and compare the HL profile of these PST with that of the Australian general population. The PST perceived they had the support, resources, and ability to manage their own health (Domains 1–5), with very similar scores to the general population for all except Domain 4. In Domain 4, PST had significantly greater scores than the general population for accessing and using social support for improving their health.
This could mean that PST perceived or received sufficient levels of social resources when they sought emotional or informational help. For Domains 6–9, the PST scored slightly lower than the general population, but this was only statistically significant for Domain 9 (Understand health information enough to know what to do). These findings show that PST find it difficult to analyse and apply health information, and more difficult than the general Australian population does. This raises questions for PST's own health and wellbeing, but also highlights that future research needs to determine whether PST are able to utilise their HL knowledge and capabilities to build students' capacity to develop interactive and critical HL capabilities (as required of them in teaching the AC:HPE and NSW PDHPE syllabus). Recent research with Australian primary and secondary teachers shows that teachers' understanding of HL is low, including how to teach HPE to enhance students' HL. Combined with Kealy-Ashby et al.'s finding that PST want more explicit teaching of HL content, and opportunities to teach HPE to enhance HL learning during teacher preparation, we can posit that more effort needs to be directed to the teaching of health education and HL in ITE programs. It seems that ITE providers and designers need to reconsider the number and content of health education and pedagogy units, to prioritise enhancing PST HL levels and understandings, and the teaching of HPE to enhance students' HL levels in relevant and meaningful ways. Kealy-Ashby et al.'s study compared these same two groups of PST, the participants in this study, reporting that the HPE PSTs had higher scores than the Primary PSTs for all nine HLQ domains, with all but one of the nine domains showing statistically significant differences. The smallest difference was for Domain 1,
‘Feeling understood and supported by healthcare professionals’, which, although the smallest, still showed a medium effect (d = 0.50, p = 0.073). The largest effect size, a very large effect between the two groups, was found for Domain 8, ‘Ability to find good health information’ (d = 1.42, p < 0.001). The importance of the previous study and this research study is that they both highlight the lower HL levels of Primary PST, compared first with their PST peers in a HPE program, and now with the general Australian adult population. There may be a reason for this difference. Although not enrolled in a full health degree, the HPE PSTs learn through several health education and pedagogy units, which is common practice across Australian ITE HPE programs: the HPE group completed five core health education and pedagogy units (equivalent to 30 credit points or 0.375 equivalent full-time student load [EFTSL]), compared with just one core health education and pedagogy unit for the Primary group (6 credit points or 0.075 EFTSL), which is also representative of Australian ITE programs. ITE coordinators need to reconsider the number of core health education and pedagogy units embedded in Primary ITE programs, or consider emphasising cross-curricular or learning-area integration in existing ITE units, including those focusing on English (literacy) and Mathematics (numeracy), to ensure that health education and HL can be taught and promoted alongside other learning areas. This may be important for providing opportunities for PST to be part of authentic teaching and learning experiences that emulate the whole-school practices that primary teachers often need to prioritise to overcome an overcrowded curriculum.
Reinforcing this is initial evidence from the HealthLit4Kids program, which showed that primary school teachers planned and implemented classroom activities that integrated at least two learning areas to enhance students' HL. In this study, we conducted further analyses comparing these two groups of PST with the general Australian population. We found that Primary PST had significantly lower scores for Domains 5–9 (which are demonstrative of interactive and critical HL) compared with the general Australian population. This is the first study the authors know of that has used the HLQ to measure HL in two PST groups and compared the results with the general population, so it is difficult to compare these findings. However, a recent cross-sectional study aimed to identify the health literacy of medical students enrolled in an Australian Doctor of Medicine program and compared the results with the general Australian population. It reported that the medical students' scores were significantly lower than those of the Australian general population, indicating areas of weakness in their ability to engage with health care providers and to navigate the health system. This is similar to the Primary PST results for Domains 6 (Ability to actively engage with health care providers) and 7 (Navigating the health care system). It is possible that both the Primary PST in this study and the medical students in Lane et al.'s study self-reported more harshly than the general population. As most of the medical students participating in Lane et al.'s study were domestic graduate-entry students in the first 2 years of the medical program, and the Primary PST had engaged in only one health education and pedagogy unit in their ITE program when the HLQ was administered, it is likely that both groups were aware of what they did and did not know, or would still need to learn, with regard to health and HL.
Further, as a younger population, they may not have been as exposed to the health care system and health professionals as the general Australian population. This explanation does not account for why HPE PST self-reported higher HLQ scores; however, given their engagement in a substantial amount of formal health education learning throughout the 4 years of their ITE program, it is not surprising that they displayed higher HLQ scores than the Primary PST, the medical students in Lane et al.'s study, and the general Australian population for Domains 5–9. The Primary PST results in this study are of concern and highlight two future considerations: (1) the need for Primary ITE programs to increase health education and pedagogy learning opportunities before graduation; and (2) the need for Primary ITE programs to provide learning and assessment opportunities that encourage cross-curriculum links (e.g., connecting HPE content with Creative Arts) and the promotion of HL. Kealy-Ashby et al.'s interview data reinforce the need to explicitly teach HL to develop PST's own HL before focusing on teaching capabilities (i.e., pedagogies and assessment to promote students' HL). The interview data also advocate for PST to be engaged in learning experiences that teach HL in more realistic situations, ideally integrating health education and HL with other learning areas and practising this planning by teaching school-aged students. Aligning with, or using, a whole-school approach in ITE programs, where cross-curriculum programs are developed and taught, would reflect realistic primary school practices and would be appropriate for enhancing students' HL levels (see ). These recommendations should be a future consideration for ITE providers and programmers. Overall, both HPE and Primary PST need greater opportunities to enhance their ability to appraise health information (i.e., to operate at a critical HL level).
Specifically, Primary PST need learning experiences that explicitly teach them how to navigate and engage with health services and professionals, and how to find, understand, analyse, and act on new health information. Without these opportunities and experiences being embedded and practised in ITE programs and curricula, it is highly likely that PST and teachers will continue to struggle with managing their own health and wellbeing and will be challenged to teach health education to enhance their students' interactive and critical HL levels, as shown in previous Australian research. There are several strengths of this study. First, it is the first study examining two groups of PSTs' HL levels through the HLQ and comparing these results with the general Australian population. Another strength was the use of the validated HLQ, which was developed and tested on the Australian population and was therefore deemed a highly suitable instrument for this research. In addition, this survey was utilised by the Australian Bureau of Statistics for the 2018 Australian Health Literacy Survey, thus providing the researchers with reliable comparative data. However, there are limitations to be acknowledged. For instance, the study does not represent all PST in Australia and was only implemented at one time point; the results therefore cannot be generalised. Another limitation, due to university professional experience placements, was a lower recruitment rate for the HLQ among Primary PST, with 96% of HPE final-year students completing the HLQ compared with 49% of the Primary group. The findings need to be viewed in light of the smaller percentage of Primary PST participants in this study.
CONCLUSION

This study has revealed the need for explicit HL learning in ITE programs, so that PST can establish higher levels of HL, and for learning opportunities that are authentic, using whole-school approaches and cross-curriculum opportunities, to practise teaching health education and HL to enhance students' HL. With very little research existing in the ITE and PST space, a systematic review of ITE HL interventions and programs across different health fields, including medicine, should be conducted to harness any other strategies that need to be embedded in ITE to enhance PST HL. This may offer broader insights into the strategies and learning experiences that could be embedded in ITE curricula and programs to develop PST HL knowledge and capabilities. A concerted effort is needed to ensure that PST and teachers are able to develop the asset of health literacy for themselves and for their students, as this can influence adult health behaviours and inhibit the emergence of NCDs and health inequities in our communities.

The authors declare no conflicts of interest. Ethical approval was gained from the University research ethics committee (HREC: 2022/161). Research was conducted in accordance with the National Statement on Ethical Conduct in Human Research (National Health and Medical Research Council, 2018). All participants provided written informed consent.
Virtual consultations: the experience of oncology and palliative care healthcare professionals

During the Covid-19 pandemic, healthcare professionals (HCP) and patients have had to familiarise themselves with virtual consultations (VC) and the accompanying technology to ensure continuity of care. This was especially important given governmental restrictions on free movement and interaction, and shielding guidance. There is a growing body of evidence published in the last year, driven by the pandemic, that has encouraged research using VCs in the fields of palliative care and oncology. Both specialities can be emotionally demanding for healthcare professionals, with HCPs often managing severely unwell patients. The work involves complex decision making and managing legal and ethical issues, in addition to caring for patients at the end of life. The sudden need for HCPs to translate their usual delivery of care onto a virtual platform, and their experience of the challenges and merits inherent in this change, are explored in this research project. The role that VC modalities may have in the future is discussed, with the aim of considering if and how VCs can supplement the care provided to patients within these fields.

Aims

To explore the experience of virtual consultations from the healthcare professional's perspective in the delivery of oncological and palliative care.

Rationale for methodology

The study was a cross-sectional mixed-methodology observational study using a survey as the data collection tool. To achieve this, an interpretative paradigm was used to understand and discover patterns in the data, which were then analysed via an inductive thematic approach. A mixed methodological approach was chosen, consisting of two components within the study: a quantitative method of data collection and a qualitative section of thematic analysis.
The quantitative data were analysed using descriptive statistics, mainly encompassing the demographic data and participant information relating to the sample.

Participants

Purposive sampling was used within the study, as the sample needed to have a role in a specific field of practice, and relevant experience, in order to answer the research question. The aim of the study was to look solely at the experience of healthcare professionals within the fields of oncology and palliative care regarding virtual consultations. 48.8% of respondents were oncology HCPs and 51.2% were palliative medicine HCPs. 63.2% were working in the hospital setting (of these, 85.5% were based in a cancer centre setting), 8% in the hospice setting and 27.6% in the community setting. See Appendix 1 and 3: Inclusion and exclusion criteria and table of results.

Recruitment

The survey was distributed online using Jisc online surveys® between the 1st of March 2022 and the 1st of April 2022. The link to the online survey was disseminated via the researcher's personal social media pages on the 1st March 2022, including Facebook® and Twitter®. Furthermore, posters in the local hospitals and hospice settings within healthcare professional areas were used to advertise the survey, and staff were encouraged to invite other colleagues. The survey link was mailed, shared and reposted via online media by professional contacts of the researcher. The post included the link and a banner advertisement. Palliative care and oncology forums, as well as a conference and a local grand round, were also used to maximise healthcare professional engagement. Data were analysed as surveys came in, and broad themes (see Results) were identified. Themes were categorised and sub-categorised. At the point of reaching 76 surveys, no new themes were emerging, and therefore a decision was made to close the survey at the pre-determined time, without the need for a further extension.
All remaining surveys that came in until the closing date were included, and again, no new themes emerged.

Enhancing response rate

The survey link was also shared via a QR code at a palliative care congress in March 2022 to gain further respondents. The response rate was also enhanced by “re-tweeting” or re-posting the link during the period the survey was open, with the intention that these would be further reposted, creating a snowball effect.

Survey development and data collection

A cross-sectional descriptive survey using Jisc online surveys® was developed, with a 20-minute completion time and consisting of three sections. The survey was developed for this study and has not been published elsewhere. The survey was developed by the first author and then refined and edited (see Appendix 2). It was pilot-tested on three healthcare professionals and further augmented based on this pilot. The first section included the demographics and participant background information. Section two included a variety of question modalities, mainly focusing on the barriers and benefits in the participants' experience of virtual consultations, and section three focused further on breaking bad news, relationship factors and the future of virtual consultations, with further emphasis on free-text responses. The survey included a mixture of multiple-choice questions, dichotomous questions, open-ended questions with free text, Likert scales and rank-order questions.

Reflexivity

Reflexivity is vital within qualitative research and involves the “process of reflecting critically on oneself as a researcher.” A documented, anonymised log was kept throughout the study period, demonstrating progression and development. As the survey was published on the researcher's own social media accounts, it is acknowledged that viewers and potential participants would know the researcher, or the researcher's contacts would share a common set of interests.
The benefit of this in obtaining a purposive sample is clear, but it also highlights the importance of considering the role of the researcher within the research process.

Data analysis

Following the closure of the survey, the data were exported to SPSS Statistics for analysis. Simple descriptive statistics were used to analyse the quantitative elements of the survey, including the demographic data, closed-ended questions, Likert scales, and multiple-choice questions. Percentages were calculated to one decimal point and limited data cleaning occurred to ensure consistency in the documentation of the data. Subgroup analysis was undertaken when relevant to the question in hand, but no subgroup comparison occurred between oncologists and palliative medicine healthcare professionals, or between professional roles, within this research study, although this could be considered in future work. The written text and comments within the open-ended free-text responses were evaluated via an inductive approach, using a reflexive thematic analysis as described by Braun and Clarke, and themes were derived from the data. The qualitative aspect of data analysis was undertaken manually. The codes were distributed into a coding tree of themes and subthemes, then reviewed and refined at regular intervals. Further validity was gained by reviewing the final themes and subthemes with a supervisor. All surveys were analysed, regardless of their full completion. Non-response to questions was minimal and considered within the data analysis.

87 surveys were submitted in total between the 1st March 2022 and the 1st April 2022. No survey data were excluded. See Appendix 3: Table for demographic and quantitative data.

Analysis of free text data
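As a minimal sketch of the descriptive quantitative analysis described in the Methods (Likert items summarised as percentages to one decimal point, with simple subgroup summaries), assuming a hypothetical survey export whose column names and values are invented for illustration:

```python
import pandas as pd

# Hypothetical extract of the survey export; column names and values are
# invented for illustration -- the real Jisc export is not public.
df = pd.DataFrame({
    "role": ["Oncology", "Palliative", "Oncology",
             "Palliative", "Palliative", "Oncology"],
    "benefit_family_access": [5, 4, 3, 5, 4, 2],  # 1 = not ... 5 = very important
})

# Share of respondents rating the item slightly/very important (4-5),
# reported to one decimal point as in the study
pct = (df["benefit_family_access"] >= 4).mean() * 100
print(f"Slightly/very important: {pct:.1f}%")  # 4 of 6 responses -> 66.7%

# Simple descriptive subgroup summary (no inferential comparison was made)
print(df.groupby("role")["benefit_family_access"].mean())
```

The free-text responses were coded manually into a theme/subtheme tree rather than processed computationally, so only the quantitative strand lends itself to this kind of scripted summary.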
Figure below gives an overview of the themes formed following an inductive thematic analysis of the free-text comments on the experience of oncology and palliative care healthcare professionals of using VC modalities.

Theme 1: Personal, professional, and familial factors

Subtheme: Patient context

An awareness of the patient's context was an important consideration when thinking of the benefits of VC. This related to the context of an unwell patient who required review but was too unwell to physically attend a clinical setting. Some patients were too frail to attend in person but benefited from the ability to be reviewed via VC. 53 respondents (61.6%) considered VCs beneficial when patients had difficulty leaving their home, or felt too unwell or tired to travel. On the other hand, other respondents felt that determining the patient's context, or how unwell they were, via VC as a modality of communication was quite challenging. They felt the needs of an unwell patient to be quite different from those of a less unwell patient. “In my experience it is very difficult to assess unwell patients and give psychological support virtually” (P1 Oncology SpR). The ability for patients who were working to be reviewed within their working day, without the need to take leave from work, was another clear benefit of VC, and there was an assertion that VC was very suitable for younger patients compared with older patients. “Also many patients can do this in between work (for ones who are working); whereas for F2F, they have to take generally the whole day off” (P64 Oncology Consultant). “Challenges are often patient acceptability- I find the younger age group are more familiar with its use, older group generally not so comfortable” (P68 PC Consultant).

Subtheme: Skill and triage

49 participants (56.8%) felt confident consulting virtually but felt that patient uncertainty and skill could be a barrier.
One participant felt that their own skill and confidence improved with growing exposure and experience of the VC modality. Similarly, another respondent felt that patients' confidence appeared to improve with use. “Patients frequently lack the confidence to be able to use technology to a sufficient level” (P48 PC CNS). “Competence increases with more use, and the ‘virtual’ barriers reduce considerably” (PC Social worker). 58 respondents (55.8%) found that patients' lack of confidence with technology was somewhat or very challenging when considering barriers to VC. When considering the skill of breaking bad news, one respondent felt that the skills required when breaking bad news in a VC setting were different from those required when reviewing a patient face to face, highlighting the possible need for increased training and awareness of what is necessary from the professional's perspective.

Subtheme: Feelings and perception

From the healthcare professional perspective, when considering the benefits of VC, there was a significant emphasis on the benefit of VC for wellbeing and mental health during the uncertain time of the pandemic, as some healthcare workers were shielding because of their own health or vulnerable family members. Participants felt that patients were more comfortable using VC, being in their own space, homes and surroundings, and that this put them at ease. Reduced apprehension was felt to positively impact the patient-practitioner relationship. “As someone who shielded with a vulnerable child, being able to continue with seeing my patients made a very real difference to my overall mental well-being at a time of great uncertainty.” (P15 PC Consultant). “Improves and deepens (the healthcare professional-patient relationship). A sense of a very safe private space to develop a relationship, whilst feeling safe and relaxed in their own home.” (P32 PC Consultant).
One participant was concerned that patients and relatives may feel offended, angry and insulted by the offer of VC, especially when patients were dying. “Patients/family members find it offensive/insensitive and an indication that HCP can't be bothered to see the dying person” (P68 PC Consultant).

Subtheme: Family

When considering the benefits of VC, the majority of participants, 48 (55.8%), expressed that the improved ability to ‘meet’ with family members and significant others via VC was a slightly or very important benefit to consider. This was considered both from the point of view of the visiting restrictions in place due to Covid-19, which limited or forbade visitors entering the hospital setting, and from the point of view of enabling families across large geographical areas to be part of discussions and reviews. “Multiple family members in multiple locations being able to join in” (P28 PC CNS). Although the ability to reach families from a wider geographical area for their involvement in consultations was a benefit, others felt that it brought communication challenges with it. “Its great that multiple family members can attend and participate, but sometimes they sit out of view and contribute and you can't see them, or see their body language, and therefore it can be more challenging to meaningfully interact with them, like you can when everyone is in the same room” (P45 PC SpR). Participants also recognised that regular physical hospital appointments for those needing oncological or palliative care can put a momentous strain on family members and carers, and that virtual consultation was perhaps an easier option for ‘joining in’ on a consult. “Reduces burden on carer having to bring patient to clinic” (P40 PC CNS).

Theme 2: Relationships and connection

Subtheme: Empowerment

Participants regularly mentioned the importance of patient choice, i.e. patients as decision makers who choose the mode of consultation they would like to have.
The choice of modality could therefore enable them to undertake a consultation that they were most comfortable with. “They usually appreciate the variety in modes of consultation. This fosters better relationships. Patients need to be assured there are also equal decision makers in having choice.” (P64 Oncology Consultant). Another participant commented on the power imbalance that can be seen in healthcare between the professional and patient within the physical space, and how, by the nature of coming to a clinical environment, one may feel intimidated. This correlates with previous comments regarding a sense that patients feel more at ease and less anxious using VC modalities. “(Virtual consultation) can support a more egalitarian relationship, patient not coming into ‘my space’ and can reduce potential power imbalance.” (P12 Oncology Psychologist).

Subtheme: Therapeutic relationship

The therapeutic relationship between the professional and patient, and the ways in which VC can impact on this relationship, was discussed throughout the data by participants. The characteristics of a therapeutic relationship referred to in the text involved trust, confidence, empathy, bond, rapport, respect, and touch. Participants referred to the challenges faced by HCPs when needing to console patients, and the inability to do this adequately via VC. They expressed missing this element of patient contact, especially when breaking bad news or giving difficult information. “Main challenge has been difficulty reassuring the patient if becoming upset, especially if video breaking up” (P17 Oncology SALT). “I perceive a more meaningful relationship with patients following face to face assessment” (P48 PC CNS). The concept of rapport was reiterated on several occasions by many of the participants, along with how consulting virtually impacted on the ability to establish and build rapport and trust.
“I think they reduced the ease of building rapport, but they are far better than a mere telephone call, because you get to put a face to a name” (P16 Oncology OT). Similarly, participants also described a possible feeling of reluctance from patients to ask key questions during a VC, which is clearly a concern when considering the understanding of treatment decisions. This was often exacerbated by technological problems, such as poor signal. “There can be a level of intensity of virtual which may not allow important questions to be asked” (P3 Oncology CNS).

Subtheme: Shared Care

The concept of shared care applies to several aspects of the consultation. It can relate to shared care within the MDT, with other sectors of care, e.g. the acute sector or primary care, and shared care with wider teams of professionals and specialities. With VC, links can be sent to other team members or colleagues caring for the patient to attend meetings and discussions, ensuring input from all specialties and disciplines involved. 36 (41.3%) of respondents felt the ability to review patients with multiple members of an MDT to be a slightly or very important benefit of VC. With restrictions in place due to infection control measures during the pandemic, MDT meetings could be held virtually, which participants felt was another benefit of VC. “Ability to continue MDT clinics despite social distancing requirements, ensured patients received the care they needed from full MDT” (P86 Oncology Speech and Language Therapist SALT). Participants described situations where this had worked well with young/transitional patients, where MDT input was required, and in initial assessments with Clinical Nurse Specialists (CNSs) and therapists for assessment and review. Similarly, other professionals and specialists were able to join and aid discussions by ‘dialling in’ without physically having to attend in person, which improved efficiency.
“Can review inpatients in other hospitals much more efficiently, so can support acute care” (P47 Oncology Consultant).

Subtheme: Difficult conversations

When considering participant experience of breaking bad news in a VC, 37 (64.9%) felt negative towards the process, 13 (22.8%) had mixed feelings, and 7 (12.2%) felt positive about their experiences. Figure summarises a number of the key phrases and words used to describe the process of breaking bad news using VC. When participants were asked which method of consulting works best for them when breaking bad news, 79 (100%) of the 79 respondents expressed that they would nearly always prefer face to face consultations. Participants felt that using a VC modality to break bad news was better than on the phone but would generally prefer to undertake this task face to face, indicating a preference hierarchy. Some expressed that bad news consultations were all undertaken face to face in their place of work, and VC modalities were never used for this purpose, indirectly highlighting the inappropriateness felt by some HCPs of using VC to have such discussions. Participants also emphasised the importance of triage to determine which patients would accept difficult conversations via VC. “The key in my mind is to choose mode of consultations personalised to the patient in discussion, and not have a blanket rule to follow blindly” (P64 Oncology Consultant). Several participants expressed that breaking bad news using VC is more appropriate with patients already known to the HCP, i.e. where a prior relationship had already been formed. “For patients I already know well and have a good rapport with, it has been acceptable, I think. I would be very hesitant in embarking on this if I did not know the patient or the situation well.” (P39 Oncology Consultant). Participants described the challenges with communication when breaking bad news virtually and the aspects of communication skills that are lost in this situation.
“Harder to pick up on cues from them, as to pace of information gathering. If they are on their own, virtual is more isolating if breaking bad news. Face to face one can allow for silence more readily” (P9 PC Consultant). Technological and internet connection issues also played a role in exacerbating the challenges, causing delays, screens freezing, and responses to questions being postponed due to auditory problems or feedback. “I had one very challenging consultation where the reception was so bad the patient ended up walking off in frustration, very hard to rectify” (P83 Oncology SALT).

Subtheme: Dialogue

Given that VC is a mode of consultation and communication, it is unsurprising that a significant proportion of respondents commented on their experiences of that specific theme, highlighting the benefits and challenges inherent in communicating via VC. Participants acknowledged the importance and the difficulties of not being able to pick up nonverbal cues and sense body language when consulting virtually, and its impact on effective communication, listening and interpreting. This may be due to technological issues and camera placement, the quality of the camera, distance from the patient and technological delays. On the other hand, some participants also felt that VC was better than telephone consultations, as patients were able to lip read and read their nonverbal cues. Others highlighted the importance of considering visual or hearing barriers that the patient may have, which might reduce the quality of virtual communications. “Communication barrier not being able to see full body language and not being able to comfort in person.” (P33 Oncology SALT).

Subtheme: Training

29 of the 87 respondents (33.3%) had training prior to conducting VCs; 58 (66.7%) did not. Only 18 participants (20.6%) felt that lack of training in VC had caused challenges when consulting.
The training varied from colleague-led and self-directed training to formal video tutorials and training via the trusts. The length of training varied from ten minutes to one hour. 25 (92.5%) of the 27 felt that the training was of benefit to them. Participants described the training as helpful or somewhat useful. Having training enabled them to support, encourage and train others, including in how to use the VC platform. “Yes, helped build confidence with kit and applications, as well as giving tips for making it work for patients” (P57 PC Consultant). Participants reflected on the challenges of inviting trainees into virtual clinics. “I haven’t probably invited junior doctors/trainees to join me in clinic as much as I did previously, as harder to include them on the screen and also means I’d need to wear a mask which I feel would make the video consultation harder for patient.” (P29 PC Consultant). The above observation was made at a time during the Covid-19 pandemic when mask wearing was essential in shared indoor spaces.

Subtheme: Prior interaction

This subtheme relates to the perception that VC was often a less troubled option with patients with whom the HCP had held a prior face-to-face consultation. Participants expressed that VCs were harder with new patients they had never met face-to-face. This was very much a recurring theme in the data. “I think it takes more time to establish a connection and there is definitely something missing if you’ve never met the person in person. Consultations have been much more effective on patients that I have met at least once in clinic” (P31 PC Consultant). Similarly, this impact was seen within communication and breaking bad news situations, participants expressing a reluctance to risk having virtual conversations without a prior real-world relationship with the patient. “Using VC for this is more appropriate with patients already known to me” (P2 PC Consultant).
Several participants alluded to the benefit of seeing patients using several modes of consultation, but felt that the initial consultation would be better conducted ‘in person’, to establish the initial relationship and build rapport, followed by VC for further follow-up. “Much harder to establish relationship in the new patient consultation. However, adds flexibility when used appropriately interspersed with face-to-face consultations.” (P59 Oncology SpR).

Theme 3: Logistical and practical implications

Subtheme: The review

Table in appendix 4 summarises the assessments undertaken via VC by the various HCP roles. When asked about the role of VC during the delivery of varying stages of patient management, a significant proportion of the participants felt that VC had a role for patients requiring routine follow up, if the patient was well and stable, and if they had already met the patient and had an established relationship with them. Participants felt that VC was appropriate for pre-systemic anti-cancer treatment (SACT) review and whilst undergoing SACT, in addition to radiotherapy review consultations to assess side effects. On the other hand, some participants felt that “any difficult discussions should by default be in person in my opinion” (P60 Oncology SpR) and “diagnosis and change in treatment plans i.e. from treatments to best supportive care should be face to face were at all possible” (P66 Oncology CNS).

Subtheme: Accessibility

Participants commented that the use of VC increases patients’ ability to access palliative care services and advice. Challenges with geographical and physical distance can be overcome in this way, and patients who live a distance from acute hospitals can benefit from specialist input regardless of where they live.
In addition to this, as demonstrated above, the use of VC allows increased access for family members/significant others/carers to join in with consultations and discussions, even when they are at work or elsewhere. “They will be a fundamental part of service delivery. They widen access to our services. I think they have a role in all stages of the patient journey, however they can be inappropriate for individuals at all stages too. Decisions regarding their use must be individualised” (P45 PC SpR).

Subtheme: Infection Prevention

In terms of the logistical impact of infection control, as described in the literature review, the role the coronavirus pandemic played in requiring and driving the implementation of VC in healthcare as a mode of reviewing patients cannot be overstated, allowing shielding patients to be reviewed at a time of significant anxiety for themselves and their families. “It has been beneficial to reduce footfall in the cancer centre and to protect patients during the peaks of the pandemic.” (P16 Oncology OT).

Subtheme: Clinical impact

Some participants reported difficulty in arranging prescriptions for patients using VC. N = 23 (27.3%) had experienced challenges with arranging prescriptions; of the respondents, n = 74 were doctors, specialist nurses and pharmacists, and therefore likely prescribers. One participant described how the inability “to issue prescriptions there and then is very frustrating and time consuming” (P67 Palliative care registrar), and the need to improve accessibility to electronic prescription services. Several participants described their experiences of, and concerns about, missing clinical signs when using VC modalities. Some participants felt that they were unable to grasp how well the patient was on a screen, impacting on the holistic clinical picture and decision making.
N = 56 (65%) of respondents found the inability to examine patients via VC quite challenging or very challenging, and of the cohort of doctors surveyed ( n = 51), n = 37 (72.5%) found the inability to examine patients challenging to some degree. Lack of examination also exacerbated patient anxiety and increased the need for supportive calls and reassurance. “Virtual appointments result in missing important clinical signs or changes in patient condition. Need to see these patients face to face to get a good sense of how they are doing.” (P5 Oncology consultant).

Subtheme: Environment

Some participants felt that their working environment negatively affected their experience of VC. This was related to challenges with privacy, a lack of dedicated physical space in which to undertake a VC, and having multiple staff members in one room whilst trying to consult with a patient. Others described challenges with background noise in an open plan office space and interruptions. “Consultant hospice colleagues lurking off camera in consultation and then popping out halfway through.” (P27 PC Consultant). An occupational therapist discussed the benefit of VC in being able to see the patient’s home and what was needed without having to physically assess, which enabled quicker processing of equipment. “Able to assess home environment to select appropriate equipment, reduced time to wait for equipment.” (P4 PC OT).

Subtheme: Time

45 of the participants (52.3%) felt that the ability to review patients quicker and with less notice was a slightly or very important factor to consider when undertaking VC. Participants felt that consultations and meetings could be arranged at shorter notice, with less need to book hospital transport or send out letters, which increased flexibility for the patient. “It is more efficient in contacting people sooner and saves travelling times and costs” (P44 PC CNS).
“Phone at a time more convenient to patient for them” (P55 PC Social worker). Some participants felt that clinics were more efficient and quicker, with fewer delays, whereas others felt that VC was more time consuming than telephone consultations due to the need to set up IT and delays related to technology. “Sessions with patients were shorter and more time efficient therefore long clinics were completed quicker” (P26 Oncology SALT).

Subtheme: Technology

Participants were asked about their experience relating to the availability of appropriate access to the equipment required to conduct VCs. N = 52 (59.8%) expressed that they did have access to appropriate equipment. Others felt that a lack of, or insufficient, equipment was a barrier in their experience of consulting virtually. This related to desktop space, computer/laptop unavailability, headsets (with headphone and microphone), and several MDT members congregating in one space attempting to access a single desktop. In addition to this, n = 39 (45.3%) of respondents experienced technical issues with the consulting programme and n = 39 (44.8%) had issues with poor internet connection. “However, if experiencing a bad internet connection, it can feel frustrating and quite a remote relationship” (P21 PC Consultant). Participants explained that rural areas had difficulty with poor internet connectivity and felt that further work needed to be undertaken to improve internet connections prior to undertaking or wholly relying on VC.

Subtheme: Travel

One of the significant benefits participants discussed in their responses was that of VC in reducing the need for travel. This was applicable to the HCPs themselves and their patients/proxies. Over half, n = 48 (55.1%), of respondents considered the reduced travel time for patients and HCPs to be a slightly or very important benefit of VC.
For HCPs working in the community, VC meant that staff spent less time travelling, and one oncology consultant felt that, because patients were not travelling, they were able to accommodate more patients in one clinic. It was felt that if the message from the consultation was clear and uncomplicated, travelling long distances could be avoided, in addition to avoiding the burden of parking and waiting. “Virtual consultations can be useful in order to avoid travelling especially when the message is straightforward, such as a favourable follow-up scan” (P53 Oncology Consultant).

Subtheme: Patient context

An awareness of the patient’s context was an important consideration when thinking of the benefits of VC. This related to the context of an unwell patient who required review but was too unwell to physically attend a clinical setting. Some patients were too frail to attend in person but benefited from the ability to be reviewed via VC. 53 respondents (61.6%) considered VCs beneficial when patients had difficulty leaving their home, or felt too unwell or tired to travel. On the other hand, other respondents felt that determining the patient’s context, or how unwell they were, via VC as a modality of communication was quite challenging. They felt the needs of an unwell patient to be quite different to those of a less unwell patient. “In my experience it is very difficult to assess unwell patients and give psychological support virtually” (P1 Oncology SpR). The ability for patients who were working to be reviewed within their working day, without the need to take leave from work, was another clear benefit of VC, and there was an assertion that VC was very suitable for younger patients compared with older patients. “Also many patients can do this in between work (for ones who are working); whereas for F2F, they have to take generally the whole day off” (P64 Oncology Consultant).
“Challenges are often patient acceptability- I find the younger age group are more familiar with its use, older group generally not so comfortable” (P68 PC Consultant) . Subtheme: Skill and triage 49 (56.8%) of the participants felt confident consulting virtually but felt that patient uncertainty and skill could be a barrier. One participant felt that their own skill and confidence improved with growing exposure and experience in the VC modality. Similarly, another respondent felt that the patient’s confidence appeared to improve with use. “Patients frequently lack the confidence to be able to use technology to a sufficient level”(P48 PC CNS) . “Competence increases with more use, and the ‘virtual’ barriers reduce considerably” (PC Social worker) . 58 (55.8%) of respondents found that the patients’ lack of confidence with technology was somewhat or very challenging when considering barriers to VC. When considering the skill of breaking bad news, one respondent felt that the skills required when breaking bad news in a VC setting was different to those required when reviewing a patient face to face, highlighting the possible need for increased training and awareness of what is necessary from the professionals perspective. Subtheme: Feelings and perception From the healthcare professional perspective, when considering the benefits of VC, there was a significant emphasis on the benefit of VC on wellbeing and mental health during the uncertain time of the pandemic, as some healthcare workers were shielding because of their own health or vulnerable family members. Participants felt that the patients were more comfortable using VC, being in their own space, own homes, and surroundings and that this in fact put them at ease. Reduced apprehension was felt to positively impact the patient-practitioner relationship. 
“As someone who shielded with a vulnerable child, being able to continue with seeing my patients made a very real difference to my overall mental well-being at a time of great uncertainty.”(P15 PC Consultant) . “Improves and deepens (the healthcare professional-patient relationship). A sense of a very safe private space to develop a relationship, whilst feeling safe and relaxed in their own home.” (P32 PC Consultant) . One participant was concerned that patients and relatives may feel offended, angry and insulted by the offer of VC, especially when patients were dying. “Patients/family members find it offensive/insensitive and an indication that HCP can’t be bothered to see the dying person” (P68 PC Consultant) . Subtheme: Family When considering the benefits of VC, the majority, 48 (55.8%) of participants did express that the improved ability to ‘meet’ with family members and significant others via VC, to be a slightly or very important benefit to consider. This was considered from the point of view of the visiting restricitions that were in place due to Covid-19, limiting or forbidding visitors to enter the hospital setting, and from the point of view of enabling families from large geographical areas to be part of discussions and reviews. “Multiple family members in multiple locations being able to join in”(P28 PC CNS) . Although the ability to reach families from a wider geographical area for their involvement in consultations was a benefit, others felt that it did bring communication challenges with it. “Its great that multiple family members can attend and participate, but sometimes they sit out of view and contribute and you can’t see them, or see their body language, and therefore it can be more challenging to meaningfully interact with them, like you can when everyone is in the same room”(P45 PC SpR) . 
Participants also recognised that regular physical hospital appointments for those needing oncological or palliative care can put a momentous strain on family members and carers, and that virtual was perhaps an easier option for ‘joining in’ on a consult. “Reduces burden on carer having to bring patient to clinic”(P40 PC CNS) . An awareness of the patient’s context was an important consideration when thinking of the benefits of VC. This related to the context of an unwell patient, who required review, but was too unwell to physically attend a clinical setting. Some patients were too frail to attend in person but benefited from the ability to be reviewed via VC. 53 respondents (61.6%) considered VC’s beneficial when patients had difficulty leaving their home, or felt too unwell or tired to travel. On the other hand, other respondents felt that determining the patients context, or how unwell they were via VC as a modality of communication was quite challenging. They felt the needs of an unwell patient to be quite different to a less unwell patient. “In my experience it is very difficult to assess unwell patients and give psychological support virtually” (P1 Oncology SpR) . The ability for patients who were working to be reviewed within their working day, without the need to take leave from work was another clear benefit of VC, and there was an assertion that VC was very suitable for younger patients, compared with older patients. “Also many patients can do this in between work (for ones who are working); whereas for F2F, they have to take generally the whole day off” (P64 Oncology Consultant) . “Challenges are often patient acceptability- I find the younger age group are more familiar with its use, older group generally not so comfortable” (P68 PC Consultant) . 49 (56.8%) of the participants felt confident consulting virtually but felt that patient uncertainty and skill could be a barrier. 
One participant felt that their own skill and confidence improved with growing exposure and experience in the VC modality. Similarly, another respondent felt that the patient’s confidence appeared to improve with use. “Patients frequently lack the confidence to be able to use technology to a sufficient level”(P48 PC CNS) . “Competence increases with more use, and the ‘virtual’ barriers reduce considerably” (PC Social worker) . 58 (55.8%) of respondents found that the patients’ lack of confidence with technology was somewhat or very challenging when considering barriers to VC. When considering the skill of breaking bad news, one respondent felt that the skills required when breaking bad news in a VC setting was different to those required when reviewing a patient face to face, highlighting the possible need for increased training and awareness of what is necessary from the professionals perspective. From the healthcare professional perspective, when considering the benefits of VC, there was a significant emphasis on the benefit of VC on wellbeing and mental health during the uncertain time of the pandemic, as some healthcare workers were shielding because of their own health or vulnerable family members. Participants felt that the patients were more comfortable using VC, being in their own space, own homes, and surroundings and that this in fact put them at ease. Reduced apprehension was felt to positively impact the patient-practitioner relationship. “As someone who shielded with a vulnerable child, being able to continue with seeing my patients made a very real difference to my overall mental well-being at a time of great uncertainty.”(P15 PC Consultant) . “Improves and deepens (the healthcare professional-patient relationship). A sense of a very safe private space to develop a relationship, whilst feeling safe and relaxed in their own home.” (P32 PC Consultant) . 
One participant was concerned that patients and relatives may feel offended, angry and insulted by the offer of VC, especially when patients were dying. “Patients/family members find it offensive/insensitive and an indication that HCP can’t be bothered to see the dying person” (P68 PC Consultant) . When considering the benefits of VC, the majority, 48 (55.8%) of participants did express that the improved ability to ‘meet’ with family members and significant others via VC, to be a slightly or very important benefit to consider. This was considered from the point of view of the visiting restricitions that were in place due to Covid-19, limiting or forbidding visitors to enter the hospital setting, and from the point of view of enabling families from large geographical areas to be part of discussions and reviews. “Multiple family members in multiple locations being able to join in”(P28 PC CNS) . Although the ability to reach families from a wider geographical area for their involvement in consultations was a benefit, others felt that it did bring communication challenges with it. “Its great that multiple family members can attend and participate, but sometimes they sit out of view and contribute and you can’t see them, or see their body language, and therefore it can be more challenging to meaningfully interact with them, like you can when everyone is in the same room”(P45 PC SpR) . Participants also recognised that regular physical hospital appointments for those needing oncological or palliative care can put a momentous strain on family members and carers, and that virtual was perhaps an easier option for ‘joining in’ on a consult. “Reduces burden on carer having to bring patient to clinic”(P40 PC CNS) . Subtheme: Empowerment Participants regularly mentioned the importance of patient choice, i.e. patients as decision makers who choose the mode of consultation they would like to have. 
The choice of modality could therefore enable them to undertake a consultation that they were most comfortable with. “They usually appreciate the variety in modes of consultation. This fosters better relationships. Patients need to be assured there are also equal decision makers in having choice.”(P64 Oncology Consultant) . Another participant commented on the power imbalance that can be seen in healthcare, between the professional and patient within the physical space, and by the nature of coming to a clinical environment, how one may feel intimidated. This correlates with previous comments regarding a sense that patients feel more at ease and less anxious using VC modalities. “(Virtual consultation) can support a more egalitarian relationship, patient not coming into ‘my space’ and can reduce potential power imbalance.” (P12 Oncology Psychologist) . Subtheme: Therapeutic relationship The therapeutic relationship between the professional and patient was discussed throughout the data by participants and the ways in which VC can impact on this relationship. The characteristics of a therapeutic relationship referred to in the text involved trust, confidence, empathy, bond, rapport, respect, and touch. Aspects referred to the challenges faced by HCP when needing to console patients, and the inability to do this adequately via VC. They expressed missing this element of patient contact especially when breaking bad news or giving difficult information. “Main challenge has been difficulty reassuring the patient if becoming upset, especially if video breaking up” (P17 Oncology SALT) . ”I perceive a more meaningful relationship with patients following face to face assessment” (P48 PC CNS) . The concept of rapport was reiterated on several occasions by many of the participants, and how consulting virtually impacted on the ability establish and build on rapport and trust. 
“I think they reduced the ease of building rapport, but they are far better than a mere telephone call, because you get to put a face to a name” (P16 Oncology OT) . Similarly, participants also described a possible feeling of reluctance from patients to ask key questions during a VC, which clearly is a concern when considering the understanding of treatment decisions. This was often exacerbated by technological problems, such as poor signal. “There can be a level of intensity of virtual which may not allow important questions to be asked” (P3 Oncology CNS) . Subtheme: Shared Care The concept of shared care applies to several aspects of the consultation. It can relate to shared care between the MDT, with other sectors of care e.g. the acute sector or primary care, and shared care with wider teams of professionals, and specialities. With VC, links can be sent to other team members or colleagues caring for the patient to attend meetings and discussions to ensure input from all specialties and disciplines involved. 36 (41.3%) of respondents felt the ability to review patients with multiple members of an MDT, to be a slightly or very important benefit of VC. With restrictions in place due to infection control measures during the pandemic, MDT meetings could be held virtually, which participants felt was another benefit of VC. “Ability to continue MDT clinics despite social distancing requirements, ensured patients received the care they needed from full MDT” (P86 Oncology Speech and Language Therapist SALT) . Participants expressed situations where this had worked well with young/transitional patients, where MDT input was required and in initial assessments with Clinical Nurse Specialists (CNSs) and therapist for assessment and review. Similarly, other professionals and specialists were able to join and aid discussions, by ‘dialling in’ and not physically having to join in person, which improved. 
“Can review inpatients in other hospitals much more efficiently, so can support acute care” (P47 Oncology Consultant). Subtheme: Difficult conversations When considering participants’ experience of breaking bad news in a VC, 37 (64.9%) felt negative towards the process, 13 (22.8%) had mixed feelings, and 7 (12.2%) felt positive about their experiences. Figure summarises key phrases and words used to describe the process of breaking bad news using VC. When participants were asked which method of consulting works best for them when breaking bad news, all 79 respondents (100%) expressed that they would nearly always prefer face to face consultations. Participants felt that using a VC modality to break bad news was better than the phone but would generally prefer to undertake this task face to face, indicating a preference hierarchy. Some explained that bad news consultations were all undertaken face to face in their place of work and VC modalities were never used for this purpose, indirectly highlighting the inappropriateness felt by some HCPs of using VC for such discussions. Participants also emphasised the importance of triage to determine which patients would accept difficult conversations via VC. “The key in my mind is to choose mode of consultations personalised to the patient in discussion, and not have a blanket rule to follow blindly” (P64 Oncology Consultant). Several participants expressed that breaking bad news using VC is more appropriate with patients already known to the HCP, i.e. where a prior relationship had already been formed. “For patients I already know well and have a good rapport with, it has been acceptable, I think. I would be very hesitant in embarking on this if I did not know the patient or the situation well.” (P39 Oncology Consultant). Participants described the challenges with communication when breaking bad news virtually, and the aspects of communication skills that are lost in this situation.
“Harder to pick up on cues from them, as to pace of information gathering. If they are on their own, virtual is more isolating if breaking bad news. Face to face one can allow for silence more readily” (P9 PC Consultant). Technological and internet connection issues also played a role in exacerbating the challenges, causing delays, frozen screens, and responses to questions being postponed due to auditory problems or feedback. “I had one very challenging consultation where the reception was so bad the patient ended up walking off in frustration, very hard to rectify” (P83 Oncology SALT). Subtheme: Dialogue Given that VC is a mode of consultation and communication, it is unsurprising that a significant proportion of respondents commented on their experiences of this specific theme, highlighting the benefits and challenges inherent in communicating via VC. Participants acknowledged the importance of nonverbal cues and the difficulty of picking them up and sensing body language when consulting virtually, and the impact of this on effective communication, listening and interpreting. This may be due to technological issues, camera placement, the quality of the camera, distance from the patient, and technological delays. On the other hand, some participants also felt that VC was better than telephone consultations, as patients were able to lip read and read their nonverbal cues. Others highlighted the importance of considering visual or hearing barriers that the patient may have, which might reduce the quality of virtual communication. “Communication barrier not being able to see full body language and not being able to comfort in person.” (P33 Oncology SALT). Subtheme: Training Of the 87 respondents, 29 (33.3%) had training prior to conducting VCs and 58 (66.7%) did not. Only 18 participants (20.6%) felt that lack of training in VC had caused challenges when consulting.
The training varied from colleague-led and self-directed training to formal video tutorials and training via the trusts, and its length varied from ten minutes to one hour. 25 of the 27 (92.5%) felt that the training was of benefit to them, describing it as helpful or somewhat useful. Having training enabled them to support, encourage and train others, including in how to use the VC platform. “Yes, helped build confidence with kit and applications, as well as giving tips for making it work for patients” (P57 PC Consultant). Participants reflected on the challenges of inviting trainees into virtual clinics. “I haven’t probably invited junior doctors/trainees to join me in clinic as much as I did previously, as harder to include them on the screen and also means I’d need to wear a mask which I feel would make the video consultation harder for patient.” (P29 PC Consultant). This observation was made at a time during the Covid-19 pandemic when mask wearing was essential in shared indoor spaces. Subtheme: Prior interaction This subtheme relates to the perception that VC was often a less troubled option with patients with whom the HCP had held a prior face-to-face consultation. Participants expressed that VCs were harder with new patients they had never met face-to-face; this was very much a recurring theme in the data. “I think it takes more time to establish a connection and there is definitely something missing if you’ve never met the person in person.
Consultations have been much more effective on patients that I have met at least once in clinic” (P31 PC Consultant). Similarly, this impact was seen in communication and breaking bad news situations, with participants expressing a reluctance to risk having virtual conversations without a prior real-world relationship with the patient. “Using VC for this is more appropriate with patients already known to me” (P2 PC Consultant). Several participants alluded to the benefit of seeing patients using several modes of consultation, but felt that the initial consultation would be better conducted ‘in person’, to establish the relationship and build rapport, followed by VC consultations for further follow-up. “Much harder to establish relationship in the new patient consultation. However, adds flexibility when used appropriately interspersed with face-to-face consultations.” (P59 Oncology SpR). Subtheme: The review Table in appendix 4 summarises the assessments undertaken via VC by the various HCP roles. When asked about the role of VC during the delivery of varying stages of patient management, a significant proportion of the participants felt that VC had a role for patients requiring routine follow up, if the patient was well and stable, and if they had already met the patient and had an established relationship with them. Participants felt that VC was appropriate for pre-systemic anti-cancer treatment (SACT) review and whilst patients were undergoing SACT, in addition to radiotherapy review consultations to assess side effects. On the other hand, some participants felt that “any difficult discussions should by default be in person in my opinion” (P60 Oncology SpR) and “diagnosis and change in treatment plans i.e. from treatments to best supportive care should be face to face were at all possible” (P66 Oncology CNS). Subtheme: Accessibility Participants commented that the use of VC increases patients’ ability to access palliative care services and advice.
Challenges with geographical and physical distance can be overcome in this way, and patients who live at a distance from acute hospitals can benefit from specialist input regardless of where they live. In addition, as demonstrated above, the use of VC allows family members/significant others/carers increased access to join in with consultations and discussions, even when they are at work or elsewhere. “They will be a fundamental part of service delivery. They widen access to our services. I think they have a role in all stages of the patient journey, however they can be inappropriate for individuals at all stages too. Decisions regarding their use must be individualised” (P45 PC SpR). Subtheme: Infection Prevention In terms of the logistical impact of infection control, and as described in the literature review, the role the coronavirus pandemic played in requiring and driving the implementation of VC in healthcare cannot be underestimated: it allowed shielding patients to be reviewed at a time of significant anxiety for themselves and their families. “It has been beneficial to reduce footfall in the cancer centre and to protect patients during the peaks of the pandemic.” (P16 Oncology OT). Subtheme: Clinical impact Some participants reported difficulty in arranging prescriptions for patients using VC: n = 23 (27.3%) had experienced challenges with arranging prescriptions, and 74 of the respondents were doctors, specialist nurses or pharmacists, and therefore likely prescribers. One participant described how the inability “to issue prescriptions there and then is very frustrating and time consuming” (P67 Palliative care registrar) and the need to improve accessibility to electronic prescription services. Several participants described their experience of, and concerns about, missing clinical signs using VC modalities.
Some participants felt that they were unable to grasp how well the patient was via a screen, impacting on the holistic clinical picture and decision making. N = 56 (65%) of respondents found the inability to examine patients via VC quite challenging or very challenging, and of the cohort of doctors surveyed (n = 51), n = 37 (72.5%) found the inability to examine patients challenging to some degree. Lack of examination also exacerbated patient anxiety and increased the need for supportive calls and reassurance. “Virtual appointments result in missing important clinical signs or changes in patient condition. Need to see these patients face to face to get a good sense of how they are doing.” (P5 Oncology consultant). Subtheme: Environment Some participants felt that their working environment negatively affected their experience of VC. This related to challenges with privacy, lack of a dedicated physical space in which to undertake a VC, and having multiple staff members in one room whilst trying to consult with a patient. Others described challenges with background noise in an open plan office space and interruptions. “Consultant hospice colleagues lurking off camera in consultation and then popping out halfway through.” (P27 PC Consultant). An occupational therapist discussed the benefit of VC in being able to see the patient’s home and what was needed without having to assess it physically, which enabled quicker provision of equipment. “Able to assess home environment to select appropriate equipment, reduced time to wait for equipment.” (P4 PC OT). Subtheme: Time 45 of the participants (52.3%) felt that the ability to review patients more quickly and with less notice was a slightly or very important factor to consider when undertaking VC. Participants felt that consultations and meetings could be arranged at shorter notice, with less need to book hospital transport or send out letters, which increased flexibility for the patient.
“It is more efficient in contacting people sooner and saves travelling times and costs” (P44 PC CNS). “Phone at a time more convenient to patient for them” (P55 PC Social worker). Some participants felt that clinics were more efficient and quicker, with fewer delays, whereas others felt that VC was more time consuming than telephone consultations due to the need to set up IT and delays related to technology. “Sessions with patients were shorter and more time efficient therefore long clinics were completed quicker” (P26 Oncology SALT). Subtheme: Technology Participants were asked about their access to the equipment required to conduct VCs. N = 52 (59.8%) expressed that they did have access to appropriate equipment. Others felt that a lack of, or insufficient, equipment was a barrier in their experience of consulting virtually; this related to desktop space, computer/laptop unavailability, headsets (with headphone and microphone), and several MDT members congregating in one space attempting to access a single desktop. In addition, n = 39 (45.3%) of respondents experienced technical issues with the consulting programme and n = 39 (44.8%) had issues with poor internet connection. “However, if experiencing a bad internet connection, it can feel frustrating and quite a remote relationship” (P21 PC Consultant). Participants explained that rural areas had difficulty with poor internet connectivity and felt that further work needed to be undertaken to improve internet connections before undertaking or wholly relying on VC. Subtheme: Travel One of the significant benefits participants discussed in their responses was the role of VC in reducing the need for travel. This applied both to the HCPs themselves and to their patients/proxies. Over half, n = 48 (55.1%), of respondents considered the reduced travel time for patients and HCPs to be a slightly or very important benefit of VC.
For HCPs working in the community, VC meant that staff spent less time travelling around, and one oncology consultant felt that because patients were not travelling, they were able to accommodate more patients in one clinic. It was felt that if the message from the consultation was clear and uncomplicated, travelling long distances could be avoided, in addition to avoiding the burden of parking and waiting. “Virtual consultations can be useful in order to avoid travelling especially when the message is straightforward, such as a favourable follow-up scan” (P53 Oncology Consultant). Most participants felt strongly that VC had a role in the future management of patients in oncology and palliative care. 53.5% felt that VCs could replace approximately half of face to face consultations. Overwhelmingly, 71.2% of participants agreed or strongly agreed that a mixture of face-to-face consultations and VC is the way forward. Murphy et al. suggested further work was required to gauge what type of ‘typical’ consultation was best suited to the virtual modality, compared to others. This survey was clear in demonstrating that, from the HCPs’ point of view, there was a strong preference within oncology and palliative care for consultations that included follow-up assessments and treatment plans to be offered virtually, as opposed to predominantly face to face. Initial assessments, discussions around a change in condition or treatment, and advance care planning conversations were felt to be more appropriate face to face, and difficult conversations were perceived as inappropriate and extremely challenging using VC. There would be significant benefit in reviewing patient experience, and whether feelings were unanimous, as this could have significant implications for how patient follow up is delivered. Participants also described the role VC may have in benefiting HCPs in future, e.g. during times of staff shortages, potential ongoing issues with infection control, and a reduced necessity to travel, including to and from various clinical sites. The environmental impact of reduced travelling was felt to be significant and a big factor for HCPs.
Also, eliminating waiting times for patients in busy hospital clinic areas was seen as a benefit, as was reducing anxiety by being in the safety of one’s own home during a virtual consult. Our research also highlighted discrepancies. For instance, in the subtheme “patient context,” the quantitative data show that most respondents found VC beneficial when patients are unwell, presumably because a car journey to a hospital may be very burdensome for a patient, added to the wait in a busy and noisy outpatient waiting room. However, the qualitative respondent data presented also highlight that assessing how unwell a person is can be very difficult when using VC. This contrast may just provide a magnifying glass on some of the wider pros and cons of this newer form of consultation. Respondents did not mention the potential opportunity of recording conversations; this is not routine practice from the HCPs’ point of view, but may be something that patients are already doing, whether openly or covertly. Some felt that there needed to be a greater awareness of the technology inequality that exists, and therefore VC may not be appropriate for all. The pandemic highlighted the issues around a “digital divide” and the inequality in access to technology, which includes equipment and internet connectivity. This is an opportunity for policy makers to acknowledge and address the inequity in access to technology. They also suggested telehealth hubs, potentially in rural areas, thereby centralising technology to ensure access for all. There was a strong belief that the future of VC lies within a hybrid approach using a mixture of modalities, as was seen in the systematic review by Murphy et al. Similarly, those who were interested in continuing with telehealth reported the blended approach (face-to-face consultations in addition to virtual) as the best solution.
The importance of individual choice and patient empowerment came out as a key factor for HCPs within our survey, in allowing patients themselves to choose what mode of consultation was best for them. This will require detailed triaging to ensure the consultation modality meets the clinical requirements necessary for the interaction. Recent work undertaken by Greenhalgh et al. involves virtual consultations that have been recorded, reflected upon and analysed for further training to develop the skills required when interacting using VC modalities. Study limitations One of the main limitations of this research study is recruitment bias. As the question related to the experience of HCPs of VC within oncology and palliative care, and the survey was distributed on social media, one can assume that the participants were likely well attuned to information technology and relatively confident in their IT skills. Within the limitations it is important to be aware of the researcher’s role within the research itself, which may introduce an element of bias. As the researcher was a palliative medicine trainee with experience in VC, one could argue that this may have added an element of bias to the questions asked. On the other hand, the questions were based on a current literature review. Recommendations The findings from this study supplement the existing research with regard to the role of telemedicine and VC in delivering health care, especially since the Covid-19 pandemic. It draws attention to the limitations of VC within these fields and the requirement for further exploration of patients’ experience of certain aspects of the consultation. As part of ongoing work we hope to evaluate patients’ and carers’ views on VC via a large survey.
The findings have been shared in workforce planning meetings in our local Health boards and NHS Trusts, to consider how the implementation of VC within some aspects of clinical consultations can impact on consultation efficiency, workforce, and cost. Organisational meetings now have virtual meetings and consultations as standing agenda items, invariably containing a mixed bag of positive, indifferent, and negative feedback about various aspects of this new practice. The findings from this research can further evidence discussions and impact on care delivery of oncological and palliative care patients in future. Below is the link to the electronic supplementary material. Supplementary Material 1: 1. Inclusion and exclusion criteria. 2. The survey. 3. Table of results (demographic and quantitative data). 4. Table summary of types of assessments undertaken by varying HCPs using VC.
Treatment of Stage I-III Hip Joint Tuberculosis With Open Surgical Debridement and Hip Spica in Children: A Retrospective Study | 1b33d11a-78b8-4f2e-a75f-7fcfbba1a2a5 | 9470041 | Debridement[mh] | This retrospective study was reviewed and approved by the Human and Ethics Committee for Medical Research at Sichuan University in accordance with the Declaration of Helsinki [No. 2022(745)]. Informed consent was obtained from all patients for being included in the study and written informed consent regarding publishing their data and photographs was obtained from parents of all pediatric participants. From January 2010 to January 2016, totally 91 young patients with HTB of stage I to III received surgical debridement treatment in our department, West China Hospital. Four patients were excluded because of the loss of follow-up, and finally 87 patients were enrolled in the study. Routine chest and spine physical examination, chest x-ray and 3 sputum cultures for mycobacterium were performed for any patient suspected with HTB to search for the potential pulmonary TB and spine TB. If a pulmonary TB or severe spine TB was determined, another treatment strategy would be considered before the debridement of HTB, and these cases were not included in this study. The preoperative diagnosis was mainly a clinical diagnosis based on the clinical presentation, a history of TB or contact with TB, inflammatory markers, TB antibody test, interferon-γ release assay, and the radiographs. All patients who were surgically treated in our department routinely underwent smear and culture of the infection sample, acid-fast stain, and TB polymerase chain reaction for HTB diagnosis. According to the classification by Tuli, , HTB was divided into 4 stages, including synovitis stage, stage of early arthritis, stage of advanced arthritis, and stage of advanced arthritis with subluxation/dislocation (Table ). 
Patients with stage IV HTB were not included because dislocation or subluxation already existed before treatment in this stage and a single debridement procedure might not be enough for the treatment of these dislocated hips. All patients were less than 14 years of age and received open surgical debridement after 4 weeks of conservative treatment, including rest, protected weight-bearing, and antitubercular chemotherapy (4 drugs: rifampicin, isoniazid, pyrazinamide, and ethambutol). Patients with obvious relief in symptoms and a decrease in laboratory tests of erythrocyte sedimentation rate and C-reactive protein were considered for continued conservative treatment rather than surgical treatment and were not included in this study. All diagnoses of the enrolled patients were confirmed by polymerase chain reaction testing or pathologic diagnosis of biopsy specimens. Surgery was performed by 2 senior pediatric orthopaedic surgeons well trained in this technique under general anesthesia. An anterolateral approach was made to access the hip joint with patients in the supine position. To achieve a thorough debridement, we performed a T-shaped capsulotomy from anterior and then dislocated the femoral head from the acetabulum. The debridement included the removal of infectious tissues, part of the hyperplastic synovium, necrotic tissues, and the sequestrum. The capsule was not closed, with only 2 sutures of the lateral articular capsule, and no drainage was used. The immobilization time of 4 or 6 weeks was not decided by the surgeon’s preference or by a random method but was based on when the patients first came back for follow-up. We usually recommended that patients come back 4 to 6 weeks after surgery, according to their personal arrangement. A flexible follow-up time made it easier for those parents with full work schedules or those who lived far away from our hospital.
After operation, patients in group A (39 patients) received a postoperative hip spica for 4 weeks and patients in group B (48 patients) received 6 weeks of hip spica to provide immobilization and prevent postoperative hip instability. When the spica was removed, patients were recommended partial weight-bearing for 7 days and then followed the same hip joint exercise program. Postoperative antitubercular chemotherapy lasted for 1 year, consisting of a 4-drug protocol for 6 months (rifampicin, isoniazid, pyrazinamide, and ethambutol) and a 2-drug protocol for 6 months (rifampicin and isoniazid). All enrolled patients were followed up for at least 5 years, with a mean follow-up of 5.8±1.6 years (range from 5 to 8 y). Patients were followed up about every 2 weeks for 2 months, then every 2 months during the first year, then every 2 years after the second year. Data were collected from the hospital records, and the final clinical outcome was evaluated by 2 independent pediatric orthopaedic surgeons. During follow-up, patients were mainly assessed by postoperative radiographs (Fig. ), modified Harris hip score (MHHS), complications of hip dislocation or subluxation, and wound healing problems. SPSS 20 was used for data analysis. Continuous data were reported using the mean ± SD and range. Categorical data were reported as numbers and percentages. Comparisons of variables between baseline and the endpoint were analyzed using paired t tests when the distribution was normal; comparisons of continuous data between the 2 groups were analyzed using the independent-sample t test. Otherwise, the χ2 test was used. A P-value of <0.05 was considered significant.
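The between-group comparison of continuous outcomes can be illustrated with a pooled-variance (Student's) t statistic computed from summary statistics. This is a minimal Python sketch of the independent-sample t test named above, not the authors' SPSS workflow; the numbers plugged in are the early MHHS summary values reported in the Results:

```python
def t_from_summary(m1, sd1, n1, m2, sd2, n2):
    """Student's t statistic for two independent samples from summary stats."""
    # Pooled variance across the two groups.
    sp2 = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    se = (sp2 * (1 / n1 + 1 / n2)) ** 0.5  # standard error of the mean difference
    return (m1 - m2) / se, n1 + n2 - 2     # t statistic, degrees of freedom

# Early MHHS: group A 79.2 +/- 8.5 (n = 39) vs group B 75.5 +/- 7.5 (n = 48).
t, df = t_from_summary(79.2, 8.5, 39, 75.5, 7.5, 48)
# t is about 2.16 with 85 df, in line with the reported P = 0.032.
```

With 85 degrees of freedom, a t statistic of about 2.16 corresponds to a two-sided p-value close to the 0.032 the study reports for this comparison.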
In total, 87 patients with HTB who underwent open surgical debridement were retrospectively evaluated, including 34 females and 53 males, with a mean age of 7.2±2.8 years (range from 2 to 14 y). The most common chief complaint was hip pain, which presented in 85 patients; others included abnormal gait (46 patients), knee pain (4 patients), and low-grade fever (8 patients). The average length of time from symptom onset to clinical diagnosis was 5.6±3.7 months (from 3 wk to 13 mo). Patients with HTB usually shared some classical characteristics in the laboratory tests, including a high erythrocyte sedimentation rate and an increased level of C-reactive protein. All these baseline data showed no significant difference between the 2 groups. The detailed data are presented in Table . In group A, the mean MHHS improved from 52.1±14.7 before surgery to 87.8±8.3 at the final follow-up ( P <0.001). In group B, there was also a significant difference between the preoperative MHHS (52.7±9.4) and the final MHHS (88.6±6.5) ( P <0.001), whereas no significant difference was found in the final MHHS between the 2 groups ( P =0.593). The only difference between the 2 groups was the early functional outcome after 4 weeks of exercise, assessed using the MHHS. The early MHHS of group A was 79.2±8.5, significantly higher than that of group B (75.5±7.5, P =0.032). According to the modified Moon criteria (Table ) for outcome assessment, there were 69.0% excellent (60/87), 14.9% good (13/87), 9.2% fair (8/87), and 6.9% poor (6/87) results in this group of patients. Nine patients had mild wound healing delay, including 3 in group A and 6 in group B ( P =0.705), and all were successfully treated with dressing changes. During the 5-year follow-up, 4 patients developed pathologic hip subluxation, including 3 in group A and 1 in group B, which also showed no significant difference ( P =0.467).
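The categorical comparisons above (e.g., subluxation in 3 of 39 vs 1 of 48 patients) use a χ² test. A continuity-corrected (Yates) 2×2 χ² is a common choice for small 2×2 tables and is one plausible reading of the reported values; the sketch below is an illustration in Python, not the authors' exact SPSS procedure:

```python
from statistics import NormalDist

def yates_chi2(a, b, c, d):
    """Chi-square for a 2x2 table with Yates' continuity correction.
    Rows: group A (a events, b non-events), group B (c events, d non-events)."""
    n = a + b + c + d
    num = n * max(abs(a * d - b * c) - n / 2, 0) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Subluxation during follow-up: 3 of 39 in group A vs 1 of 48 in group B.
chi2 = yates_chi2(3, 36, 1, 47)

# With 1 degree of freedom, the p-value follows from the standard normal CDF,
# because the square of a standard normal variable is chi-square with 1 df.
p = 2 * (1 - NormalDist().cdf(chi2 ** 0.5))
# chi2 is about 0.53 and p about 0.47, consistent with the reported P = 0.467.
```

That the corrected statistic reproduces a p-value near 0.467 suggests a continuity-corrected test was used for this small table, though the paper does not state this explicitly.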
Early diagnosis and treatment of bone TB in childhood remain challenging. The most common presentation of bone TB is spondylodiscitis (or Pott disease), and the hip joint is one of the most frequent locations of tubercular arthritis in children. Prognosis of HTB depends on the disease extent. If diagnosed and treated at an early stage, ~90% to 95% of patients would achieve healing with near normal function. However, early diagnosis of TB remains difficult, despite advances in detection technology. Kabore et al reported a nearly 1-year delay of diagnosis after clinical onset due to the nonspecific symptoms and radiologic characteristics of bone TB in the early period. In a retrospective study from China, Chen et al also reported a prolonged time span of 13.16 months in the diagnosis of bone TB, which probably resulted from the atypical clinical characteristics, a shortage of effective diagnostic measures, and socioeconomic differences. In their study, the most common onset symptom was local pain (83.18%), followed by local swelling (11.5%) and impairment of function (3.5%). These atypical onset symptoms usually contribute little to the suspicion of bone TB. For childhood TB, finding direct evidence of the organism in body fluids or tissues is also more difficult than in adults. Fewer than 40% of childhood TB cases can be microbiologically confirmed, and more than 60% of childhood TB cases are diagnosed by the analysis of signs and symptoms, radiography, tests of infection, and epidemiology. Because of these difficulties in early diagnosis of childhood HTB, outcomes of pediatric HTB are usually poor. Thus, more available, sensitive, and effective testing methods are still needed. In our study, we observed a 5.6-month delay in the diagnosis of HTB, which was shorter than those in the literature mentioned above. It might be related to the more aggressive operative indications we adopted, which could make it easier to get biopsy specimens for diagnosis.
Another reason was that the high TB burden in our area made surgeons here more likely to suspect and test early for HTB. Antitubercular chemotherapy is still the highly effective and primary treatment for HTB, whereas surgical debridement or synovectomy also plays an important role in avoiding further joint damage when there is no relief after conservative treatment. A delayed intervention may lead to more severe damage of bone and cartilage, subluxation, or ankylosis. Both open surgery and minimally invasive arthroscopy have been reported as debridement procedures for joint TB in the literature. Moon et al reported open surgical debridement or synovectomy combined with postoperative immobilization to treat pediatric HTB, with good results in 73.1%. Tiwari et al described hip arthroscopy as an effective and safe minimally invasive procedure in the treatment of pediatric HTB, which seemed to have the advantages of earlier return to activity, less invasiveness, and less perioperative morbidity when compared with the open procedure. They also described that involvement of the labrum seen under arthroscopy might indicate a poor prognosis. The other difference between the open and arthroscopic procedures was the use of postoperative spica. After open surgical debridement, some authors recommended a hip spica for 4 to 6 weeks or even several months, which usually was absent for the arthroscopy procedure. Pathologic dislocation or subluxation is a less common complication of hip infective arthritis, which could lead to severe joint dysfunction and difficulty in management. Campbell and Hoffman reported an incidence of pathologic hip dislocation due to HTB as high as nearly 17%. Hip spica was a common procedure used in the treatment or prevention of hip dislocation in young patients, especially for those with an unstable hip joint after surgery.
For pediatric HTB, a hip spica for 4 to 6 weeks was recommended as a conservative choice for young children to reduce pain when traction could not be conducted. Moon et al also reported immobilization with a cast for 1 to 3 months after operation in the treatment of pediatric HTB and achieved good clinical outcomes. However, long-term immobilization may result in poor hip joint function. Saraf and Tuli believed that mobilization exercise could bring more satisfying results when compared with immobilization by hip spica in pediatric HTB. DiFazio et al found that prolonged casting was one of the predictors of skin complications with hip spica use. Pisecky et al also believed shorter protocols for spica cast immobilization after hip reconstruction led to fewer complications. Emara et al described a 4-week immobilization with fewer complications and higher patient comfort when compared with a longer protocol in a prospective clinical trial. The literature showed various attempts to establish the duration of hip spica casting, but there is still no standard for the duration of postoperative hip spica in pediatric HTB treatment. In our study, the rate of subluxation in the 4-week spica group (7.7%) was higher than that in the 6-week spica group (2.1%), with no significant difference ( P =0.467). This might mean that prolonged hip spica showed no benefit in decreasing the risk of pathologic subluxation after debridement treatment, whereas pathologic subluxation could be the result of multiple factors, such as parents’ compliance, intensity of daily activity during the rehabilitation period, and others. Thus, we could not draw a firm conclusion based on the limited available data. We still believed that the time of spica immobilization should be individualized and based on the surgeon’s evaluation and the patient’s condition, because a hip subluxation or dislocation due to HTB could be a catastrophic outcome in children.
The outcome of delayed pediatric HTB is usually poor because of the high risk of complications. Moon et al reported a 9.3% rate of poor outcomes in their retrospective study and minor morphologic abnormalities in nearly 71% of the joints. Agarwal et al also reported a poor outcome rate of 18.5% in their retrospective study. According to the modified Moon criteria, poor outcomes were observed in only 6 patients (6.9%) in our study. One main reason for the low incidence of poor outcomes in our study was the exclusion of stage IV HTB, which usually resulted in a damaged joint and poor outcome. We also used the MHHS to evaluate the functional outcome of patients preoperatively and postoperatively. There were significant improvements in MHHS after surgical treatment in both groups, which meant open surgical debridement combined with antitubercular therapy was an effective treatment for HTB patients without relief after conservative treatment alone. No significant difference was found in the postoperative MHHS between the 4-week and 6-week spica groups, which might mean the duration of postoperative hip spica immobilization did not affect the final functional outcome. However, there was an obvious delay in early functional recovery when the immobilization time was increased from 4 to 6 weeks. We reported 87 cases of HTB, which was a large number for a single institution. There were several possible reasons. China has the second highest number of TB patients in the world, and some limited available epidemiologic data indicated an increasing trend in extrapulmonary TB in China. In a single-center study from China, Pang et al reported 19,279 hospitalized TB patients, with 33.4% (6433 cases) extrapulmonary TB between 2008 and 2017. In their study, the most common extrapulmonary TB was skeletal TB (44.1%). In another epidemiological study of extrapulmonary TB from China between 2015 and 2018, 204 cases of skeletal TB were reported.
Another reason for this finding was that Western China is a TB high-burden area. The fifth national TB epidemiological survey in China indicated that Western China had the highest prevalence of TB when compared with other regions. Our hospital was one of the largest medical centers in Western China, receiving patients from many other provinces. These might be the reasons why we had so many HTB cases, even more than other areas of China. There were still some limitations in our study. Two main functional assessment systems for pediatric HTB have been reported in the literature: the MHHS and the Moon criteria, which were both used in our study. Functional assessment of HTB is difficult in children, and there is still no consensus on the scoring system. We also did not have much experience with this because few studies have focused on it. Thus, both assessment systems above were tried in our study. In our study, the mean postoperative Harris Hip Score was in the 80s. This finding should be interpreted together with the functional results assessed by the other assessment system. According to the modified Moon criteria, there were 69.0% excellent (60/87) results, which meant nearly normal hip joint function and a high score. The remaining 31% of patients had unsatisfactory functional outcomes to varying degrees, with lower scores. This could be a possible reason for the mean score being in the 80s. Growth discrepancy is less frequent in patients with early-stage HTB or a stable hip joint. Sometimes, abnormal gait and false unequal leg length could be observed, which were mainly caused by pelvic obliquity, poor hip joint function, or severe destruction. A true leg length discrepancy is not so common or obvious early on, and we did not focus on this problem. Other limitations of our study included the small sample size and the nonrandomized retrospective study design.
In addition, the intensity of daily activity of different patients during the rehabilitation period was also not considered, which might have influenced early hip stability and function. Therefore, we could not draw a firm conclusion, and further research is needed.
Open surgical debridement combined with antitubercular chemotherapy is an option for pediatric HTB patients with no relief after conservative treatment. Prolonged spica cast immobilization may not reduce the risk of postoperative dislocation or subluxation but could lead to a delay in early functional recovery. Time of spica immobilization should be individualized and based on the surgeon’s evaluation and the patient’s condition.
|
Health literacy in parents of children with Hirschsprung disease: a novel study | eb3b22bb-ad9e-4005-b0f0-e99a520ab1ec | 11618141 | Health Literacy[mh] | Hirschsprung disease (HD) affects one in 5000 newborns with a male predominance of four to one. The condition involves the lack of ganglion cells in the myenteric and submucosal plexuses along a variable length of the distal gut, causing functional bowel obstruction . Up to 30% of patients with HD have other comorbidities, Down syndrome being the most common involving around 10% of cases . Although primary surgery for HD is generally successful, post-operative bowel dysfunction is common to varying degrees long term . Bowel management in children with HD can be complex, involving medication, bowel evacuation routines, and special diets that require close parental control . Parents coordinate care, communicate with daycare and schools, and are central in treatment decisions. They often need to cope with mental, physical, and social stress related to their child’s condition, which can negatively impact the daily life of both the child and the rest of the family . Health literacy (HL) is the ability to access, comprehend, evaluate, and apply health-related information . Enhanced HL is considered fundamental for future healthcare, enabling digitalization, home-based care, shared decision-making, and equity . Parental HL encompasses a range of skills and competencies that allow parents to effectively navigate the healthcare system, understand medical instructions, communicate with healthcare providers (HCP), and make informed choices about their child’s health . A recent systematic review on the relationship between parental HL and health outcomes for children with chronic diseases found a clear link between parental HL, health behavior and child health outcomes . HL in parents of children with HD has not previously been studied. 
The aim of this study was, therefore, to explore parental HL in the context of HD and to investigate the possible effects of demographic factors and self-efficacy on parental HL.
Study design and recruitment A cross-sectional study was conducted with parents of children under 16 years who had undergone HD surgery at Oslo University Hospital. The hospital is a tertiary referral center for pediatric surgery and treats around 80% of the country’s HD patients. The department participates in the European reference network ERNICA and offers multidisciplinary follow-up, including psychosocial support to families. We identified 137 patients who underwent HD surgery between 2007 and 2024 through patient records. Two patients had died and five had moved abroad, leaving 130 eligible participants. Primary caregivers able to answer the questionnaire in Norwegian were invited via mailed invitations or at the outpatient clinic by an independent person from October 2023 to May 2024. Participants could complete the form online or on paper and non-responders received a reminder after three weeks. Measures Patient and parent characteristics Clinical data such as diagnosis, surgeries, comorbidities and age at diagnosis were collected from records. Caregiver information, such as living situation, education, home language and work situation was collected via questionnaire. Hirschsprung disease study-specific questionnaire A study-specific questionnaire on general knowledge about HD included 6 statements on basic facts and misconceptions (Fig. ). This questionnaire was designed to get a general impression of the participants’ disease-specific knowledge about HD. We tested the questions with parents and colleagues and revised them locally. Parents used a 5-point Likert scale to indicate agreement. For analysis, “strongly agree” and “agree” were grouped as “agree”, and “strongly disagree” and “disagree” as “disagree”. A comment section was provided for additional remarks. The health literacy questionnaire – parent (HLQ-p) The HLQ is a generic, multidimensional instrument designed to assess an individual’s HL skills and abilities . 
We used the parent-specific version of the HLQ (HLQ-p), which is a validated tool used to assess HL levels among parents in relation to the healthcare of their children. It evaluates the ability of parents to access, understand and communicate health information related to their child’s healthcare. The questionnaire consists of nine domains relating to parental HL: (1) feeling that healthcare providers understand and support their child’s situation, (2) having sufficient information to manage their child’s health, (3) actively managing their child’s health, (4) social support for health, (5) appraisal of health information, (6) ability to actively engage with healthcare providers, (7) navigating the healthcare system, (8) ability to find good health information, and (9) understanding health information well enough to know what to do. Responses for domains 1 to 5 are measured on a 4-point Likert scale ranging from “strongly disagree” to “strongly agree”, while domains 6 to 9 utilize a 5-point Likert scale assessing capability/difficulty ranging from “can’t do/always difficult” to “always easy”. A total score is not calculated for the nine HLQ-p scales. Instead, mean scale scores are calculated and interpreted separately (10). Scores > 2 on scales 1–5 signal a change from “disagree” to “agree”, and > 3 on scales 6–9 suggest a shift from “sometimes difficult” to “usually easy”, reflecting changes in HL without setting a fixed threshold for limitation. The HLQ-p has been validated in Norwegian with satisfactory results (11). The reliability of the HLQ-p was assessed using Cronbach’s alpha (Cronbach’s α 0.8–0.9). The electronic health literacy scale (eHEALS) The eHEALS assessed the participants’ perceived level of electronic HL (eHL), regarding finding, evaluating and utilizing online healthcare information. The 8-question survey uses a 5-point Likert scale, with a higher score suggesting a higher level of eHL.
The tool has shown robust construct validity and reliability across various settings. Cronbach’s alpha was 0.9. The general self-efficacy scale (GSES) The GSES is a psychological assessment tool measuring the participants’ belief in their ability to handle challenges and accomplish goals. The scale comprises 10 items, with total scores ranging from 10 to 40 and higher scores indicating higher self-perceived self-efficacy. For this study, scores were normalized to a 1 to 4 scale. The GSES has shown validity and reliability in studies on patients with different conditions. Cronbach’s alpha was 0.9. Statistical analysis Data analyses were performed using Stata 18.0. Initially, general characteristics were summarized using means and standard deviations. Independent t-tests were used to compare HLQ-p domain scores across parent and child factors. To assess the connections between HLQ-p, eHEALS, and various parental and child factors, bivariate correlation (Pearson’s r) was utilized. Next, a hierarchical linear multiple regression analysis in three steps was performed using the enter method: Step 1 included age, education and language; Step 2 added living arrangements; and Step 3 adjusted for GSES score. The selection of variables included in the regression models was guided by the initial analyses. The associations are presented as standardized beta coefficients. Adjusted R2 indicated the variation explained by the associations. A cluster variable for paired parents (88 pairs) ensured valid regression analysis; however, adjustments for clustering revealed no significant differences, so all parents ( n = 132) were included. Significance was set at p < 0.05. The online form ensured no missing data for the HLQ-p, GSES and HD questionnaire by making responses mandatory. We achieved a 98% completion rate for the optional eHEALS; the small amount of missing data was deemed negligible for the analysis.
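The scoring and reliability steps described in this section (per-scale HLQ-p means, GSES normalization, and Cronbach's alpha) can be sketched in a few lines of Python. The item-to-scale groupings below are hypothetical placeholders rather than the real HLQ-p mapping, the divide-by-10 GSES normalization is an assumption consistent with the stated 10–40 to 1–4 rescaling, and the authors used Stata, not Python:

```python
from statistics import variance

def scale_means(responses, scale_items):
    """Mean item score per scale (how HLQ-p scale scores are reported).
    responses: item id -> Likert value; scale_items: scale name -> item ids."""
    return {s: sum(responses[i] for i in items) / len(items)
            for s, items in scale_items.items()}

def cronbach_alpha(items):
    """Cronbach's alpha from per-item response lists (respondents in same order):
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]
    return k / (k - 1) * (1 - sum(variance(v) for v in items) / variance(totals))

# Hypothetical item groupings, for illustration only.
means = scale_means({"q1": 3, "q2": 4, "q3": 3, "q20": 4, "q21": 5},
                    {"scale1": ["q1", "q2", "q3"], "scale6": ["q20", "q21"]})

# GSES: a 10-item total of 10-40, presumably rescaled to 1-4 by dividing by 10.
gses_normalized = 32 / 10  # e.g. a raw total of 32 -> 3.2

# Reliability check: two perfectly consistent items give alpha = 1.0.
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]])
```

On this toy data, scale1's mean of about 3.3 would fall in the "agree" region (> 2 on the 4-point scales) and scale6's mean of 4.5 in the "usually easy" region (> 3 on the 5-point scales).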
Ethics The project was ethically approved by the Regional Committee for Medical Ethics (REK; 402,216) and the Hospital’s Data Protection Officer (22/03367). All parents gave written consent. The children received age-appropriate information about the study.
A cross-sectional study was conducted with parents of children under 16 years who had undergone HD surgery at Oslo University Hospital. The hospital is a tertiary referral center for pediatric surgery and treats around 80% of the country’s HD patients. The department participates in the European reference network ERNICA and offers multidisciplinary follow-up, including psychosocial support to families. We identified 137 patients who underwent HD surgery between 2007 and 2024 through patient records. Two patients had died and five had moved abroad, leaving 130 eligible participants. Primary caregivers able to answer the questionnaire in Norwegian were invited via mailed invitations or at the outpatient clinic by an independent person from October 2023 to May 2024. Participants could complete the form online or on paper, and non-responders received a reminder after three weeks.
Patient and parent characteristics
Clinical data such as diagnosis, surgeries, comorbidities and age at diagnosis were collected from records. Caregiver information, such as living situation, education, home language and work situation, was collected via questionnaire.
Hirschsprung disease study-specific questionnaire
A study-specific questionnaire on general knowledge about HD included 6 statements on basic facts and misconceptions (Fig. ). This questionnaire was designed to get a general impression of the participants’ disease-specific knowledge about HD. We tested the questions with parents and colleagues and revised them locally. Parents used a 5-point Likert scale to indicate agreement. For analysis, “strongly agree” and “agree” were grouped as “agree”, and “strongly disagree” and “disagree” as “disagree”. A comment section was provided for additional remarks.
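As a concrete illustration, the grouping of Likert responses into the analysis categories can be sketched as below (an assumed mapping, not the authors’ code; the label of the neutral midpoint is hypothetical, as it is not named in the text):

```python
# Hypothetical sketch: collapsing 5-point Likert responses into the
# grouped categories used for analysis ("agree" / "disagree").
# The neutral midpoint label is an assumption.
COLLAPSE = {
    "strongly agree": "agree",
    "agree": "agree",
    "neither agree nor disagree": "neither agree nor disagree",
    "disagree": "disagree",
    "strongly disagree": "disagree",
}

responses = ["strongly agree", "disagree", "agree", "strongly disagree"]
grouped = [COLLAPSE[r] for r in responses]
print(grouped)  # ['agree', 'disagree', 'agree', 'disagree']
```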
The health literacy questionnaire – parent (HLQ-p)
The HLQ is a generic, multidimensional instrument designed to assess an individual’s HL skills and abilities . We used the parent-specific version of the HLQ (HLQ-p), which is a validated tool used to assess HL levels among parents in relation to the healthcare of their children . It evaluates the ability of parents to access, understand and communicate health information related to their child’s healthcare. The questionnaire consists of nine domains relating to parental HL: (1) feeling that healthcare providers understand and support their child’s situation, (2) having sufficient information to manage their child’s health, (3) actively managing their child’s health, (4) social support for health, (5) appraisal of health information, (6) ability to actively engage with healthcare providers, (7) navigating the healthcare system, (8) ability to find good health information, and (9) understanding health information well enough to know what to do.

Responses for domains 1 to 5 are measured on a 4-point Likert scale ranging from “strongly disagree” to “strongly agree”, while domains 6 to 9 utilize a 5-point Likert scale assessing capability/difficulty ranging from “can’t do/always difficult” to “always easy”. A total score is not calculated for the nine HLQ-p scales. Instead, mean scale scores are calculated and interpreted separately (10). Scores > 2 on scales 1–5 signal a change from “disagree” to “agree”, and > 3 on scales 6–9 suggest a shift from “sometimes difficult” to “usually easy”, reflecting changes in HL without setting a fixed threshold for limitation. The HLQ-p has been validated in Norwegian with satisfactory results (11). The reliability of the HLQ-p was assessed using Cronbach’s alpha (Cronbach’s α 0.8–0.9).
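To make the scoring rule concrete, the sketch below (illustrative only, not the authors’ scoring code; the item responses are hypothetical) computes a domain mean and checks it against the interpretation thresholds described above:

```python
# Illustrative sketch of HLQ-p scoring: each domain is summarized by the
# mean of its item responses, and the mean is compared against the
# interpretation thresholds (> 2 for domains 1-5 on the 4-point scale,
# > 3 for domains 6-9 on the 5-point scale).

def domain_mean(responses):
    """Mean scale score for one HLQ-p domain."""
    return sum(responses) / len(responses)

def above_threshold(domain_number, mean_score):
    """True if the mean falls in the 'agree' / 'usually easy' range."""
    threshold = 2 if domain_number <= 5 else 3
    return mean_score > threshold

# One parent's hypothetical responses for domain 4 (social support)
social_support = [2, 3, 2, 3, 2]
score = domain_mean(social_support)
print(score, above_threshold(4, score))  # 2.4 True
```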
The electronic health literacy scale (eHEALS)
The eHEALS assessed the participants’ perceived level of electronic HL (eHL), regarding finding, evaluating and utilizing online healthcare information . The 8-question survey uses a 5-point Likert scale, with a higher score suggesting a higher level of eHL . The tool has shown robust construct validity and reliability across various settings . Cronbach’s alpha was 0.9.
The general self-efficacy scale (GSES)
The GSES is a psychological assessment tool measuring the participants’ belief in their ability to handle challenges and accomplish goals . Comprising 10 items, scores range from 10 to 40, with higher scores indicating higher self-perceived self-efficacy. For this study, scores were normalized to a 1 to 4 scale. The GSES has shown validity and reliability in studies on patients with different conditions . Cronbach’s alpha was 0.9.
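The rescaling used here is simple arithmetic; a minimal sketch (assuming standard GSES scoring of 10 items each rated 1–4, not the authors’ code) is:

```python
# Minimal sketch (assumption: standard GSES scoring of 10 items rated 1-4).
# The raw total ranges from 10 to 40; dividing by the number of items
# rescales it to the 1-4 range used in this study.

def gses_normalized(item_scores):
    """Rescale a raw GSES total (10-40) to the 1-4 range."""
    if len(item_scores) != 10:
        raise ValueError("GSES has 10 items")
    return sum(item_scores) / 10

items = [3, 3, 4, 3, 2, 3, 4, 3, 3, 4]  # hypothetical responses
print(gses_normalized(items))  # 3.2
```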
Statistical analysis
Data analyses were performed using Stata 18.0. Initially, general characteristics were summarized using means and standard deviations. An independent t-test was used to compare parent and child factors against HLQ-p domains. To assess the connections between HLQ-p, eHEALS, and various parental and child factors, bivariate correlation (Pearson’s r) was used. Next, a hierarchical linear multiple regression analysis was performed in three steps using the enter method: Step 1 included age, education and language; Step 2 added living arrangements; and Step 3 adjusted for GSES score. The selection of variables included in the regression models was guided by the initial analyses. The associations are presented as standardized beta coefficients, and adjusted R² indicated the variation explained by the models. A cluster variable for paired parents (88 pairs) was used to check the validity of the regression analysis; since adjustments for clustering revealed no significant differences, all parents (n = 132) were included. Significance was set at p < 0.05. The online form ensured no missing data for the HLQ-p, GSES and HD questionnaire by making responses mandatory. The optional eHEALS achieved a 98% completion rate, and the small amount of missing data was deemed negligible for the analysis.
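The stepwise block-entry logic can be sketched on synthetic data (illustrative only; the study used Stata 18.0, and the variable names, effect sizes, and data below are invented):

```python
# Illustrative sketch of three-step hierarchical regression on synthetic
# data: predictor blocks are entered cumulatively and adjusted R^2 is
# compared across steps. Not the authors' analysis code.
import numpy as np

def adjusted_r2(y, X):
    """Fit OLS via least squares and return adjusted R^2."""
    X1 = np.column_stack([np.ones(len(y)), X])  # add intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    n, k = X1.shape
    r2 = 1 - ss_res / ss_tot
    return 1 - (1 - r2) * (n - 1) / (n - k)

rng = np.random.default_rng(0)
n = 132
age, edu, lang = rng.normal(size=(3, n))         # Step 1 block
living = rng.integers(0, 2, n).astype(float)     # Step 2 block
gses = rng.normal(size=n)                        # Step 3 block
hlq = 0.3 * edu + 0.4 * living + 0.6 * gses + rng.normal(scale=0.8, size=n)

step1 = np.column_stack([age, edu, lang])
step2 = np.column_stack([step1, living])
step3 = np.column_stack([step2, gses])
for name, X in [("Step 1", step1), ("Step 2", step2), ("Step 3", step3)]:
    print(name, round(adjusted_r2(hlq, X), 2))
```

With each added block, the adjusted R² rises only if the new predictors explain enough variance to offset the penalty for extra parameters, which mirrors the interpretation of the three-step model above.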
Ethics
The project was ethically approved by the Regional Committee for Medical Ethics (REK; 402,216) and the Hospital’s Data Protection Officer (22/03367). All parents gave written consent. The children received age-appropriate information about the study.
Cohort characteristics
Parents of 91/130 (70%) children completed the questionnaires. The median age of the children was 8 (0–15) years (Table ). We received 132 parent responses, of which 79 (60%) were from mothers. Responses included 44 cases where both parents participated. In addition, one parent responded for 3 siblings, and another parent responded for 2 siblings. For the remaining 42 children, one parent responded per child, giving a total of 132 responses. Mean parental age was 39.8 years (SD 6.8), with no significant age difference between fathers and mothers (41.1 versus 38.9 years, p = 0.7). Most parents lived with the other parent of the child (79.5%), had higher education (52%), and worked full time (79%). Of the parents, 36% spoke another language or combined another language with Norwegian at home. Of the children, 75% were male, 74% had short segment HD and 28% had additional comorbidity, Down syndrome being the most common (13%).
General knowledge about Hirschsprung disease
The results from the HD study-specific questionnaire indicated that most parents had a good general knowledge about the congenital nature and rarity of HD (Fig. ). They also recognized the necessity for regular bowel movements and acknowledged that children with HD can get more ill from stomach flu than other children. Awareness about the existence of a national patient association was limited.
Health literacy, eHEALS and GSES scores
The average HLQ-p scores were above the critical low thresholds, with the highest scores in the domains “understanding health information well enough to know what to do” (domain 9) and “active engagement” (domain 6) (Table ). The lowest scores were observed in the domains “feeling that HCP understands and supports my child’s situation” (domain 1), “appraisal of health information” (domain 5) and “social support” (domain 4). The parents generally demonstrated high eHL scores, with 82% of the parents having a total score of > 3 points (maximum score 5), suggesting good ability to use electronic resources to manage their child’s health. For self-efficacy, the mean GSES score was 3.2 (SD 0.5, maximum score 4), with 69% scoring high, defined as a score > 2. GSES scores were comparable between mothers and fathers (mean score 3.2 versus 3.1, p = 0.5) (Table ).
Factors influencing health literacy
Higher self-efficacy, living with the child’s other parent, and higher education correlated with higher scores in most HLQ-p domains (Table ). Norwegian-only speakers at home and parents over 40 years also scored higher in certain domains. Parental sex and child-related factors such as the child’s age, time since diagnosis, length of aganglionosis, comorbidity or syndromes showed no correlation with HLQ-p scores and were therefore excluded from the multivariate regression analysis.

In summary, the regression analysis revealed that parental age, language spoken at home, education and living arrangements significantly influenced HL scores (Table , supplement). Parents over 40 years scored higher in understanding health information and managing their child’s health (domains 2 and 9, St. β 0.2). Norwegian-only speakers scored higher in communication and healthcare system navigation (domains 6 and 7, St. β 0.3). Higher education correlated with higher scores in all domains (St. β 0.2 to 0.5).

When living arrangements were included (Step 2), parents living together scored higher in most domains except domain 7 (navigation) and domain 3 (active management) (St. β 0.2 to 0.5). Meanwhile, higher education remained significant for all domains except domain 3 (active management) (St. β 0.2 to 0.4). Norwegian-only parents continued to score higher in communication and navigation (domains 6 and 7, St. β 0.3 to 0.4).

When the GSES score was added (Step 3), higher self-efficacy correlated with higher scores across all HLQ-p domains (St. β 0.2 to 0.7). Cohabiting parents still scored higher in HCP support and communication (domains 1 and 6, St. β 0.3, 0.5), social support (domain 4, St. β 0.5), critical appraisal (domain 5, St. β 0.2) and finding and understanding health information (domains 8 and 9, St. β 0.3, 0.4). The final model explained 20–50% of the variance in the HLQ-p scales.
The main finding of this study exploring HL in parents of HD children is that the parents generally have good knowledge about the disease, but struggle with social and emotional aspects of caring for their child. A comprehensive study on HL in parents of children with HD has not been conducted previously, and our results offer several new insights.

Parents reported a lack of social support related to their child’s HD. We do not know the reasons for this but hypothesize that the stigma associated with defecation problems and the rarity of HD contribute to the sense of isolation . Furthermore, Norway’s geography makes finding peers and support networks locally challenging. Moreover, only half of the parents in this study were aware of the HD patient association, suggesting that a possible source of peer support and shared experiences may be underutilized. Previous research has found similar issues among HD families , with one study stressing parents’ lack of self-efficacy in seeking social support when caring for a child with HD . Parents of children with anorectal malformation (ARM) experience similar psychosocial burdens , indicating a need for accessible support systems. HCP should therefore inform families about patient groups and support networks.

Parents generally perceived a lack of support and understanding from HCP about the child’s situation, which is surprising as our center offers HD families direct contact with their care team, including stoma nurses, and patients are routinely followed until age 18 with a transition consultation to prepare for adult healthcare systems. The reasons for this perception are not clear, but it is possible that specialized HD professionals unintentionally make parents feel overlooked in their efforts to normalize the condition and reduce over-medicalization. Additionally, some parents found interactions with general practitioners and emergency room staff challenging due to their unfamiliarity with HD, leading to difficulties in symptom interpretation and appropriate treatment. Effective family-centered care requires HCP to provide parents with appropriate information, discuss treatment options and value their preferences and concerns . Improving these aspects is crucial in building trust and ensuring parents and their children feel supported and informed.

HD parents struggled with evaluating the quality and relevance of health information. This may contribute to their perception of being less capable of managing their child’s condition compared to parents of children with other chronic illnesses . Since HD management is different for every child, parents need to adapt advice to their child’s specific needs, which requires critical HL skills . If HCP acknowledge these challenges, they can give better support and help families feel confident.
Sociodemographic factors influencing health literacy
Parental sex did not influence HL levels in this study. Some research suggests fathers are less engaged in health services than mothers . However, one study found higher communicative HL in fathers, although they were also more educated than the mothers . Our study is unique due to the high participation of fathers, possibly reflecting Norway’s emphasis on equal parental rights and social-gender equality. The similar HL levels in mothers and fathers may reflect mutual involvement in caring for a child with HD. Nevertheless, parental collaboration is crucial in alleviating the adverse impacts of chronic conditions on a child’s overall well-being .

We found that younger HD parents had lower HL, aligning with some, but not all, studies on parental HL . Interestingly, time since HD diagnosis (a measure of experience) did not influence HL levels, suggesting that age, rather than experience, plays a role in enhancing HL. This may reflect greater life experience and maturity and implies that young parents need extra support.

The finding that lower education predicts low HL is expected and consistent with global literature . One study linked reduced HL to lower socioeconomic status, revealing barriers to care access and shared decision-making for those parents . Academic education likely improves HL through accumulated knowledge and skills . However, higher education does not guarantee high HL, as many highly educated parents also had HL challenges.

Parents not living with the child’s other parent had more HL challenges. Research on social determinants for health in HD found that parental marital status affected a child’s risk of developing Hirschsprung-associated enterocolitis . Similarly, unmarried maternal status has been linked to increased birth-related risks . These findings underscore the need to consider family structure in HD management, suggesting targeted interventions for HL challenges in diverse family settings.
Language barriers and cultural disparities are known to complicate communication and HL and may even affect postoperative outcomes . Immigrants and their Norwegian-born children make up roughly 20% of Norway’s population, and significant HL disparities exist among these communities . Parents who spoke only Norwegian at home had better engagement with HCP and understanding of healthcare systems compared to those who also spoke another language. This suggests that even proficient Norwegian-speaking bilingual parents may have HL challenges related to language and that the use of interpreter services is crucial. Moreover, excluding non-Norwegian-speaking parents likely skews our findings towards higher HL.

No child-related factors influenced parental HL. Since children with complex HD or comorbidity have more interactions with healthcare, we expected their parents to have increased HL. However, having a child with comorbidity, long-segment HD, permanent stoma or appendicostomy showed no link to improved HL. Research has not conclusively established a relationship between comorbidities and parental HL, and one study in fact linked comorbidity to lower HL . Comorbidities may require parents to comprehend diverse information, potentially challenging their HL skills.

Our results point to self-efficacy as a strong predictor of parental HL, consistent with existing research in various pediatric patient groups . Enhancing self-efficacy through tailored interventions like education, mastery classes, and support networks could effectively improve HL in HD parents. Furthermore, parents demonstrated high levels of eHL, similar to findings among Swedish parents of children needing surgical care , suggesting eHL interventions could be effective. Electronic resources can provide accessible, tailored information, enabling informed decisions and active participation in their child’s care .
Strengths and weaknesses
An important strength of the study is the authentic representation of the parent population. Oslo University Hospital treats about 80% of HD patients in Norway, and we evaluated HL in parents of 70% of these children. Families not included are those living in the northern part of Norway, typically with longer distances to the local hospital. Additionally, the study includes a substantial number of fathers and non-native speakers. Offering both online and paper surveys ensured diverse eHL levels. Another strength lies in the use of validated tools. Weaknesses involve the cross-sectional design lacking long-term follow-up, the relatively small population limiting advanced statistical analysis, lack of data on non-responders, and insufficient information on non-Norwegian-speaking parents’ HL. Lastly, the study-specific questionnaire has not undergone formal validation, so we cannot be certain that it accurately measures parents’ knowledge about HD.
Parents of children with HD feel HCP lack understanding of their child’s challenges, experience limited social support and struggle with health information interpretation. HCP should address these barriers and offer targeted HL efforts to young, lower-educated, non-cohabiting parents, and to those who do not primarily speak the official language at home. Understanding these factors can guide tailored HL interventions for specific groups.
Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 20 KB)
Patient perceived value of teleophthalmology in an urban, low income US population with diabetes | e4538370-56ae-43f7-9227-f31cedcd47e2 | 6952085 | Ophthalmology[mh] | Affecting almost 10% of the US population, diabetes mellitus is a growing pandemic, with a third having diabetic retinopathy, the leading cause of blindness in the working age population . Annual dilated eye exams are the standard of care to detect advancing, vision threatening, but often asymptomatic retinopathy, in a timely fashion , allowing for treatments that preserve and improve vision . Unfortunately, at best, only 60–70% of those with diabetes have an annual dilated eye exam. This percentage averages between 25–35% for low income populations in under resourced settings . Teleophthalmology is the innovative utilization of nonmydriatic fundus camera-based retinopathy examination in non-eye care settings, including primary care offices, using eye doctors to remotely grade images and recommend appropriate follow-up eye care. Although such programs have tremendously improved annual examination rates for retinopathy in low income and under resourced settings [ – ], widespread adoption of such examination technology and processes have yet to occur. Research on patients’ attitudes about teleophthalmology is limited. Understanding how patients perceive the value of using teleophthalmology programs to screen for retinopathy and assess vision in non-eye care settings is important for developing successful programs and increasing its adoption . Qualitative assessment of patient experiences with teleophthalmology through focus groups and interviews allows for improved design and implementation of such programs by understanding local consumer needs [ – ]. 
Conducting focus groups and qualitative analysis to elicit patient experiences and obtain candid perspectives of their health and health care yields richer insights into local community beliefs that influence adoption of health practices than quantitative questionnaires or surveys alone [ – ]. Cost and access have been identified as the two main barriers to obtaining dilated eye exams in focus groups assessing the knowledge, attitudes, and beliefs of patients with diabetes in urban and rural settings in the US [ , , ]. None of these studies addressed the use of teleophthalmology to examine eyes for diabetic retinopathy. The majority of work evaluating patient experiences with teleophthalmology has focused on international populations using quantitative surveys [ – ]. One qualitative study from the UK found that participants value teleophthalmology if they understand why it is being recommended and if it is convenient and accessible by safe transportation. One of the few studies to assess teleophthalmology users in the US noted that patients may not understand the reason for these examinations. A survey of US Veterans with diabetes also found that convenience was a key factor in favoring teleophthalmology. However, this cohort had not actually experienced teleophthalmology. A recently published study of rural, white Caucasian patients in Wisconsin who had experienced this type of examination found that the convenience of teleophthalmology was a key facilitator, whereas not knowing enough about teleophthalmology was a key barrier to having such an exam. Our study investigates how patients value having a teleophthalmology examination offered in urban US primary care provider (PCP) practices serving low income, minority patients. We include the perspective of those who have and who have not undergone such a teleophthalmology exam using qualitative analysis.
This study was approved by the University of Rochester’s Research Subjects Review Board as an exempt study (RSRB00065090). The ethics committee approved the verbal consent procedure and did not require written consent due to the nature and the activities of the study. All participants provided informed verbal consent for their participation and for the audio-recordings during the initial phone contact to schedule an interview or a focus group. Participants received $25 cash for their participation in the focus group or interview, as well as bus tokens as needed for transportation.
Setting
Two primary care settings serving low income, largely minority, inner city populations in Rochester, NY implemented teleophthalmology programs in conjunction with a local University-based ophthalmology department in 2013 and 2015, respectively. One clinic was in a health system outside the University system. This clinic was hospital based, with approximately 2100 patients with diabetes. The other clinic, owned by the University, was located in a neighborhood setting and had about 500 patients with diabetes. The teleophthalmology program used a Zeiss Visucam NM PRO (Carl Zeiss Meditec, Dublin, CA, USA) nonmydriatic fundus camera in the hospital clinic and the Topcon NW400 (Topcon Medical Systems, Inc, Oakland, NJ, USA) nonmydriatic fundus camera in the neighborhood clinic to take three standard fields and one anterior segment photo of each eye. Both clinics used Snellen visual acuity charts to examine patients with diabetes for vision loss without dilating their eyes. Patients without a documented eye exam (per HEDIS criteria) were identified and slated for a teleophthalmology-based exam either at their next PCP visit or were scheduled for a diabetic management nurse visit where they received the teleophthalmology exam. A patient care technician or nurse obtained identifying information, assessed visual acuity, and took monoscopic digital photos of the eye.
The latter were uploaded to a secure cloud server. After the images were read by a single ophthalmologist (RSR) from the university eye institute within 1 day, electronic reports describing the presence of any disease and visual acuity were uploaded to the cloud server. If the images were not of sufficient quality to grade for disease (about 8% of cases), a notation stating this was recorded in the report and the patient was recommended to see an eye care provider within 3 months. Concurrently, an e-mail notification that the report was available was sent to the clinic’s contact person. Once downloaded from the web portal, the report was added to the electronic medical record (EMR). These results and the recommended follow-up duration for an eye doctor visit were shared with the patient via phone within a few days. Patients were not billed for this program.
Participants
Participants were recruited in 2017 using convenience sampling from the 2 primary care clinic settings. Eligible participants were identified by clinic staff through a review of EMRs as having diabetes and being medically and cognitively fit to interview or participate in a focus group conducted in English. They also either had a dilated eye exam, had been assessed via teleophthalmology, or had not seen an eye doctor in at least the last two years. The clinic staff asked eligible participants if they would be interested in the study either in person or by phone. Interested participants were contacted via phone by the study staff to schedule a convenient time for a focus group or an interview, but not both. Eligible participants were at least 18 years old and had diabetes. Individuals were excluded if they did not speak, read and write English, or reported that they were legally blind when asked during a phone screening interview.
The focus groups and interviews were conducted in English, and participants needed to be able to see how a digital camera could take a picture of the retina as depicted in an on-line video. Out of the 90 patients identified by the clinic who were reachable and eligible for a prescreening phone interview, 42 agreed to participate, and 23 participated and completed the study. Based on their utilization of eye exams, participants were categorized into the following groups: experience with teleophthalmology (n = 7) or no experience with teleophthalmology (n = 16). A third group (those who had not had a dilated eye exam in the last two years and had not had experience with teleophthalmology) was identified by clinic staff as potential subjects, but none of these patients participated in a focus group or were interviewed when asked. Detailed methods are reported using the COREQ checklist .
Data collection
Semi-structured interviews and 2 focus groups were conducted from April to July 2017 by a facilitator and a research assistant in the two primary care settings, participants’ homes, or another location as preferred by the participant. The focus groups and interviews were conducted by two female master’s degree holding doctoral students in human development who had experience conducting focus groups and interviews and performing quantitative and qualitative assessments in previous clinically oriented research studies. There was no prior relationship between the focus group or interview facilitators and study participants. At the start of the focus group or interview, the facilitators discussed the study purpose, their credentials, and role. Family members of patients could be present but could not participate in the focus groups or interviews. Each interview/focus group lasted approximately 45–60 minutes, facilitated by an interview guide (on-line supplementary appendix ). Participants agreed to be audio-recorded. The recordings were transcribed by a professional transcriptionist.
Transcriptions and field notes taken during and after the focus groups and interviews were used in the data analysis. Data collection continued until data saturation was reached. In both the interviews and focus groups, participants first completed a brief (~10-minute) self-administered survey [All relevant data underlying this study are within the paper. The full survey data can be found at https://doi.org/10.5281/zenodo.3550069 ] in English with open- and closed-ended items that were derived from the behavioral risk factors survey study and previously published literature on the perception and satisfaction of teleophthalmology programs and obtaining dilated eye exams among patients with diabetes , but was not pilot tested. The survey included the following sections, detailed in the supplemental material: demographic information (7 items), health information (6 items), personal views on the importance of eye care and having a dilated eye exam (5 items), and perceived value of teleophthalmology (9 items in the dilated exam group and 7 items in the teleophthalmology group) ( ). Before completing the teleophthalmology-focused section, participants were shown a 3-minute video ( https://youtu.be/URqAoD3oap4 ) on teleophthalmology-based examination for diabetic retinopathy similar to the program implemented for our population. They were informed that 1) the intervention served as a limited examination to promptly detect eye disease in diabetic patients, and that 2) it did not replace a comprehensive diabetic eye exam that they would receive from an eye doctor but was a recommended alternative if they could not or had not seen an eye doctor for a dilated eye exam in the past year. Participants who experienced teleophthalmology completed two questions specific to their experience.
Participants in the dilated eye exam only group (i.e., no experience with teleophthalmology) were asked about their comfort with using teleophthalmology if it were to be offered by their primary care office and whether they would ask their PCP about the teleophthalmology program. Both groups were asked, “if you had to pay for the camera-screening out of pocket, how much would you be willing to pay?” Upon completing the survey, responses to the sections on personal views and the perceived value of dilated eye exams and teleophthalmology were the basis for discussion among participants, where they shared their views with the group. While the survey responses for willingness to pay (WTP) had specific dollar values corresponding to the standard insurance co-pays for the local patient population seen in the clinics, subsequent discussion elicited more detail on what participants were willing to pay.
Data analysis
Participant demographics were analyzed using means and standard deviations for continuous variables and frequencies and percentages for categorical variables. Group differences were assessed using ANOVA for continuous variables and chi-square tests for categorical variables, or Fisher’s exact test for smaller sample groupings. A p-value of less than 0.05 was considered statistically significant. SPSS (version 24) was used for quantitative data analysis. Open-ended questions and transcribed data from focus groups and interviews were coded using thematic analysis by two of the authors (RSR, SY). This process involved identifying passages linked to the questions asked in sections 3 and 4 of the survey. First, each coder individually evaluated each response on a line-by-line basis, circling key phrases that corresponded with patient perspectives pertinent to the discussion. Then the coder looked for how they were grouped into relevant themes. After the individual coding process, the coders met and reviewed each theme for agreements/disagreements.
The disagreements were addressed by going back to the data and recoding it as a group.
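The quantitative tests described above (ANOVA for continuous variables, Fisher’s exact test for small categorical groupings) can be sketched in code. The study itself used SPSS v24 on its own data; the scipy-based version below is an illustration only, and every data value in it is hypothetical rather than taken from the study.

```python
# Illustrative sketch of the group-difference tests; all values are
# hypothetical (the study used SPSS v24 on its own participant data).
import numpy as np
from scipy import stats

# Hypothetical ages: teleophthalmology group (n = 7) vs dilated-exam-only (n = 16).
tele_age = np.array([48, 52, 45, 50, 47, 55, 49])
dilated_age = np.array([61, 58, 66, 63, 59, 70, 64, 62,
                        68, 57, 65, 60, 67, 69, 58, 63])

# One-way ANOVA for a continuous variable across the two groups.
f_stat, p_anova = stats.f_oneway(tele_age, dilated_age)

# Fisher's exact test for a categorical variable in a small sample:
# hypothetical 2x2 counts of employed vs not employed, by group.
table = [[5, 2],    # teleophthalmology group
         [4, 12]]   # dilated-exam-only group
odds_ratio, p_fisher = stats.fisher_exact(table)

print(f"ANOVA: F = {f_stat:.1f}, p = {p_anova:.4g}")
print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p_fisher:.3f}")
```

Fisher’s exact test stands in for chi-square here because, with only 23 participants, expected cell counts fall below the usual chi-square validity threshold, matching the paper’s use of Fisher’s exact for “smaller sample groupings.”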
Participant characteristics
Participant characteristics are shown in . The 23 participants all had physician-diagnosed type 2 diabetes. Seven had undergone teleophthalmology to assess for diabetic retinopathy in their primary care provider’s office (teleophthalmology group). The dilated exam only group consisted of 16 participants who only had a dilated eye exam with an eye doctor to check for diabetic retinopathy. The teleophthalmology group was slightly younger (p < .01), more likely to be employed (p < .05), and less likely to have an eye doctor (p < .02). Half of both groups reported some difficulty with distance vision, even with glasses. The majority in both groups also reported trouble with reading while wearing reading glasses.
Main results
Tables and compare the results of open-ended written responses and the subsequent discussion, including barriers to obtaining a dilated eye exam, the benefits of teleophthalmology, potential barriers to receiving teleophthalmology, and each participant’s WTP for the teleophthalmology service. The reported results are aggregated, as responses were similar between the two groups.
Barriers to obtaining dilated eye exams
Using surveys followed by facilitated discussion allowed for richer and more varied responses. During the discussion, almost all participants strongly voiced the lack of insurance coverage for medical care, being on a fixed income, and having a limited budget as barriers to obtaining a dilated eye exam. Cost of care and the cost to access care were main themes in all interviews and focus groups. The discussion also highlighted two additional barriers: transportation challenges and being asymptomatic. Participants commented on the difficulty of finding convenient parking and the safety of driving post dilation. Many also spoke about ‘forgetting to make an appointment’ or ‘putting off making an appointment,’ especially if they did not have visual or eye symptoms.
Value of a teleophthalmology exam
Participants listed convenience (48%) and the ability to detect disease early to give oneself ‘peace of mind’ by knowing and being educated on the status of one’s eye health (35%) as reasons to have a teleophthalmology exam at their primary care visit. The value of teleophthalmology included its quickness and convenience, a ‘one stop shop.’ In addition, participants acknowledged value not only in giving reassurance that there was no vision-threatening retinopathy but also in allowing for early detection of disease so that ‘something could be done about it,’ allowing for potential treatment to prevent vision loss. Personalized education from having the provider review the findings in the retinal photos to understand the disease better was also of value. Most respondents reported their WTP as the amount of their usual visit copay for the teleophthalmology exam, but actual costs for the exam were not discussed. More than half indicated on their survey that they would be willing to pay $30 or $40 for the teleophthalmology service. In this small sample, there was no significant relationship between WTP and type of health care insurance, eye care coverage, or employment status ( ). While missing primary care appointments or ‘not showing up’ and potential ‘poor customer service’ were noted as potential barriers by a few participants, everyone focused on cost of care as the primary barrier during the discussion. Many emphasized that they would ‘want to know the cost’ of the teleophthalmology examination before deciding to have it done.
Participants would be more likely to participate if they knew that their insurance would pay for the service, as they were ‘tight on budget’ and living ‘dollar to dollar.’ Despite noting limitations in what they could actually afford, participants expressed value for having eye exams to ensure good vision, with a few stating they would ‘pay $100 to $200’ for an exam ‘if [they] could afford it.’ The overall experience of participants who had a teleophthalmology exam was positive. They expressed confidence in primary care staff skills for conducting the examination and labeled it a ‘helpful service.’ Teleophthalmology fit well into their primary care visit, and many stated it was an ‘excellent experience.’ They would recommend teleophthalmology to a friend and would be willing to have such an exam again. In addition, everyone in the dilated exam only group noted that they would be ‘comfortable’ with having a teleophthalmology-based examination at their PCP office. Although three (13%) said they would prefer an in-person dilated eye exam with an eye doctor over a teleophthalmology exam, 20 participants (87%) expressed interest in having a teleophthalmology exam at their PCP office if it was recommended by their PCP.
Participant characteristics are shown in . The 23 participants all had physician diagnosed type 2 diabetes. Seven had undergone teleophthalmology to assess for diabetic retinopathy in their primary care provider’s office (teleophthalmology group). The dilated exam only group consisted of 16 participants who only had a dilated eye exam with an eye doctor to check for diabetic retinopathy. The teleophthalmology group was slightly younger (p < .01), more likely to be employed (p < .05), and less likely to have an eye doctor (p<0.02). Half of both groups reported some difficulty with distance vision, even with glasses. The majority in both groups also reported trouble with reading while wearing reading glasses.
Tables and compare the results of open-ended written responses and the subsequent discussion including barriers to obtaining a dilated eye exam, the benefits of teleophthalmology, potential barriers to receiving teleophthalmology, and each participant’s WTP for the teleophthalmology service. The reported results are aggregated as responses were similar between the two groups. Barriers to obtaining dilated eye exams Using surveys followed by facilitated discussion allowed for richer and more varied responses. During the discussion, almost all participants strongly voiced the lack of insurance coverage for medical care, being on a fixed income, and having a limited budget as barriers to obtaining a dilated eye exam. Cost of care and the cost to access care were main themes in all interviews and focus groups. The discussion also highlighted two additional barriers: transportation challenges and being asymptomatic. Participants commented on the difficulty of convenient parking and safety driving post dilation. Many also spoke about ‘forgetting to make an appointment’ or ‘putting off making an appointment’ especially if they did not have visual or eye symptoms. Value of a teleophthalmology exam Participants listed convenience (48%) and the ability to detect disease early to give oneself ‘peace of mind’ by knowing and being educated on the status of one’s eye health (35%) as reasons to have a teleophthalmology exam at their primary care visit. The value of teleophthalmology included its quickness and convenience, a ‘one stop shop.’ In addition, participants acknowledged value in not only giving reassurance that there was no vision threatening retinopathy but also in allowing for early detection of disease so that ‘something could be done about it’ to allow for potential treatment to prevent vision loss. Personalized education from having the provider review the findings in the retinal photos to understand the disease better was also of value. 
Most respondents reported their WTP as the amount of their usual visit copay for the teleophthalmology exam, but actual costs for the exam were not discussed. More than half indicated that they would be WTP $30 or $40 for the teleophthalmology service on their survey. In this small sample, there was no significant relationship between WTP and type of health care insurance, eye care coverage, or employment status. ( ) While missing primary care appointments or ‘not showing up’ and potential ‘poor customer service’ were noted as potential barriers by a few participants, everyone focused on cost of care as the primary barrier during the discussion. Many emphasized that they would ‘want to know the cost’ of the teleophthalmology examination before deciding to have it done. Participants would be more likely to participate if they knew that their insurance would pay for the service as they were ‘tight on budget’ and living ‘dollar to dollar.’ Despite noting limitations in what they could actually afford, participants expressed value for having eye exams to ensure good vision with a few stating they would ‘pay $100 to $200’ for an exam ‘if [they] could afford it.’ The overall experience of participants who had a teleophthalmology exam was positive. They expressed confidence in primary care staff skills for conducting the examination and labeled it as a ‘helpful service.’ Teleophthalmology fit well in their primary care visit and many stated it was an ‘excellent experience.’ They would recommend teleophthalmology to a friend and would be willing to have such an exam again. In addition, everyone in the dilated exam only group, noted that they would be ‘comfortable’ with having a teleophthalmology based examination at their PCP office. 
Although three (13%) said they would prefer an in person dilated eye exam with an eye doctor over a teleophthalmology exam, 20 participants (87%) expressed interest in having a teleophthalmology exam at their PCP office if it was recommended by their PCP.
Using surveys followed by facilitated discussion allowed for richer and more varied responses. During the discussion, almost all participants strongly voiced the lack of insurance coverage for medical care, being on a fixed income, and having a limited budget as barriers to obtaining a dilated eye exam. Cost of care and the cost to access care were main themes in all interviews and focus groups. The discussion also highlighted two additional barriers: transportation challenges and being asymptomatic. Participants commented on the difficulty of convenient parking and safety driving post dilation. Many also spoke about ‘forgetting to make an appointment’ or ‘putting off making an appointment’ especially if they did not have visual or eye symptoms.
Participants listed convenience (48%) and the ability to detect disease early to give oneself ‘peace of mind’ by knowing and being educated on the status of one’s eye health (35%) as reasons to have a teleophthalmology exam at their primary care visit. The value of teleophthalmology included its quickness and convenience, a ‘one stop shop.’ In addition, participants acknowledged value in not only giving reassurance that there was no vision threatening retinopathy but also in allowing for early detection of disease so that ‘something could be done about it’ to allow for potential treatment to prevent vision loss. Personalized education from having the provider review the findings in the retinal photos to understand the disease better was also of value. Most respondents reported their WTP as the amount of their usual visit copay for the teleophthalmology exam, but actual costs for the exam were not discussed. More than half indicated that they would be WTP $30 or $40 for the teleophthalmology service on their survey. In this small sample, there was no significant relationship between WTP and type of health care insurance, eye care coverage, or employment status. ( ) While missing primary care appointments or ‘not showing up’ and potential ‘poor customer service’ were noted as potential barriers by a few participants, everyone focused on cost of care as the primary barrier during the discussion. Many emphasized that they would ‘want to know the cost’ of the teleophthalmology examination before deciding to have it done. 
Participants would be more likely to participate if they knew that their insurance would pay for the service, as they were ‘tight on budget’ and living ‘dollar to dollar.’ Despite noting limitations in what they could actually afford, participants expressed value for having eye exams to ensure good vision, with a few stating they would ‘pay $100 to $200’ for an exam ‘if [they] could afford it.’ The overall experience of participants who had a teleophthalmology exam was positive. They expressed confidence in primary care staff skills for conducting the examination and labeled it a ‘helpful service.’ Teleophthalmology fit well into their primary care visit and many stated it was an ‘excellent experience.’ They would recommend teleophthalmology to a friend and would be willing to have such an exam again. In addition, everyone in the dilated-exam-only group noted that they would be ‘comfortable’ with having a teleophthalmology-based examination at their PCP office. Although three (13%) said they would prefer an in-person dilated eye exam with an eye doctor over a teleophthalmology exam, 20 participants (87%) expressed interest in having a teleophthalmology exam at their PCP office if it was recommended by their PCP.
Using a qualitative approach, we found that a low-income, urban, largely African American sample of patients with type 2 diabetes greatly valued having PCP-based teleophthalmology, would recommend such a service, and were willing to pay at least the amount of their usual copay. Cost was an important influencer of value. We are the first to report on WTP as an indicator of the perceived value of teleophthalmology to patients. Our study also highlights the importance of having a facilitated discussion to qualitatively assess knowledge, beliefs, and attitudes among US low-income, predominantly African American patients with diabetes, as such discussion allowed for richer and more varied responses than surveys requiring participants to answer questions on their own. We not only identified many of the same barriers to obtaining a dilated eye exam as other US based studies [ – , , , , ], but also demonstrated the value of a teleophthalmology service using nonmydriatic retinal cameras in PCP practices in overcoming such barriers. The most commonly cited value was convenience and the ability to overcome transportation and time management issues, as noted in other international and US studies, including a recent study of a white Caucasian rural population in Wisconsin [ , , , , , , ]. Other valued aspects included ease of use, the ability to detect disease before visual symptoms, and the knowledge provided by the photos and technicians about retinopathy and eye disease, which thus far have been reported only in international studies [ – ]. In addition, the use of nonmydriatic cameras without dilating the eye, avoiding temporary vision impairment, was seen as a major advantage of teleophthalmology, as noted in the recent Wisconsin study . PCP recommendation and stronger PCP-patient relationships were important patient motivators for using teleophthalmology, similar to other studies from Norway and Wisconsin, USA .
The cost of care was the major barrier to obtaining dilated eye exams, as seen in other US based studies [ , – , , – ]. Cost was universally cited as a potential barrier to obtaining a teleophthalmology-based nonmydriatic camera exam even if conveniently offered in the primary care office. Educating patients on the potential costs and value of a teleophthalmology-based examination versus a dilated comprehensive eye exam with an eye doctor may help encourage informed discussions on eye care, especially in low-income, underserved populations. Combining patient preferences and WTP can provide a more holistic picture of value for a health service such as teleophthalmology by incorporating economic evaluations, such as cost-utility analysis . A recent systematic review of economic studies demonstrated increased cost savings for using teleophthalmology for retinal screening in patients with diabetes versus traditional exams with an eye doctor, especially in populations with a higher prevalence of diabetic retinopathy, including the minority, low-income groups represented in our study . While teleophthalmology was well received, some participants expressed a strong preference to see their eye doctor. These individuals were among the older participants in the group. They expressed valuing their relationship with their eye care provider and questioned the level of expertise and thoroughness of exam afforded by primary care based teleophthalmology, a finding similar to a recent study of US Veterans . Thus, ensuring that patients, especially older adults, are comfortable with the quality and reliability of teleophthalmology is important. A recent study among American Indians demonstrated that although the digital divide may be greater among low-income minority groups, younger American Indian adults were more familiar with digital communication and technology and may be more apt to adopt such methods for accessing health care .
The participants in our study who had experienced teleophthalmology were also younger than those who just had a dilated eye exam, which may also have influenced its overwhelming acceptance in our study. Strengths of our study include having feedback from those who have used teleophthalmology to evaluate their eyes for diabetic retinopathy. It is also the first known study to ask about a potential customer’s WTP for the teleophthalmology service. Consumer WTP has been studied for other telemedicine services, especially teledermatology, whose store-and-forward model is similar to the present one used for teleophthalmology . Quicker and more convenient access to the expertise of a dermatologist, with an increased chance of receiving an accurate diagnosis in a timely fashion, was related to higher WTP . Furthermore, the use of a pre-discussion questionnaire followed by facilitated dialogue in our study allowed for richer and more varied responses than either option alone. While many studies have looked at attitudes, beliefs, and knowledge around eye care and having a dilated eye exam, especially for underserved US populations, our study is among the few to provide insight on the value and perceptions of teleophthalmology in US low-income, minority populations. Limitations of this study include factors pertaining to the composition of our focus groups and interviews and the use of convenience sampling. We also restricted our population to English speakers who were not legally blind. The small number in the teleophthalmology group limited statistical comparisons. Interpreting our participants’ WTP should be done while considering that all participants expressed the importance of an eye exam and had sought eye care within the last two years. We also chose to ask if participants were willing to pay discrete values from $0 to $40 in our pre-discussion survey, which may have limited our ability to elicit a full range of WTP values.
However, encouraging dialogue around their WTP during the discussion found participants’ WTP ranging from $0 to $100–200. Moreover, WTP and what one actually pays may not be the same . Although teleophthalmology was universally seen as valuable by our participants, cost remains a formidable barrier to obtaining such care and to widespread implementation, as recently reviewed by Liu et al. . The issue of cost as a barrier to using teleophthalmology for patients and clinics appears to be unique to the US due to its diverse fee-for-service insurance system, with the exception of the Veterans Affairs Health System. A review of European studies using teleophthalmology to screen for a variety of eye conditions demonstrated substantial cost savings to their national health systems . However, a review of the current state of teleophthalmology in the US by Rathi et al. noted significant gaps in insurance coverage for teleophthalmology among private and government insurers . Further research to test the relation between a population’s price sensitivity and their value for the convenience and other benefits provided by teleophthalmology to remotely diagnose eye disease is needed. In addition, the impact of various billing models, including value-based and fee-for-service payments, on the adoption and sustainability of teleophthalmology should be explored. Such research will better elucidate the value of teleophthalmology and help support its use in non-eye-care settings for various subsets of potential users.
S1 Text Outline of pre-focus group questionnaire administered in preparation for focus group discussion. (DOCX)
Synergistic effect of the anti-PD-1 antibody with blood stable and reduction sensitive curcumin micelles on colon cancer | f26f6eae-e03f-4c81-9387-b92c3d5ce15d | 8118404 | Pharmacology[mh] | Introduction Curcumin, a bright yellow lipophilic polyphenol derived from Curcuma longa plants, is well known for its antitumor potential, as it is nontoxic and possesses versatile biological activities, including anti-oxidant, anti-inflammatory, anti-proliferative, and anti-angiogenic effects (Dhillon et al., ; Aggarwal & Harikumar, ; Basnet & Skalko-Basnet, ; Kanai et al., ). In addition, curcumin has exhibited potential in overcoming multidrug resistance and a synergistic effect with other anticancer agents, reducing toxicity and improving efficacy in some preclinical models (Verma et al., ; Khafif et al., ; Tang et al., ; Weir et al., ; Ganta & Amiji, ; Hu & Zhang, ). Interest in the therapeutic application of curcumin in cancer therapy has led to extensive investigations. A few formulations of redox-responsive curcumin nanoparticles have been prepared for tumor treatment (Cao et al., ; Meng et al., ; Wang et al., ). Unfortunately, the clinical translation of curcumin as an anticancer agent has been severely limited. Promising therapeutic effects of curcumin have been observed in vitro ; however, its efficacy in vivo is usually inadequate and does not reflect the in vitro results. In recent years, the putative anticancer properties of curcumin have resulted in several clinical trials against various tumors, and in some cases positive trends that warrant further study were documented. However, limited success has been achieved in humans (Nelson et al., ), mainly because of its low bioavailability and confusing indications. First, curcumin is water insoluble and undergoes rapid transformation under physiological conditions, resulting in poor stability, rapid elimination and metabolism, limited cellular uptake, and minimal bioavailability (Sharma et al., ; Anand et al., ).
On the other hand, although curcumin was reported to suppress various types of cancers including pancreatic, prostate, leukemia, bladder, etc., no significant benefits have been confirmed by double-blind, placebo-controlled clinical trials (Nelson et al., ). The indication for curcumin to play a role in cancer therapy is still ambiguous. Over the past few years, the field of cancer immunotherapy has entered a new and exciting era, spurred by the extended understanding of the complex relationship between the tumor and the immune system (Robert et al., ). Human carcinoma cells can activate intrinsic programmed cell death in lymphocytes interacting with the tumor (Philips & Atkins, ), which avoids immune recognition and elimination and promotes tumor growth and metastasis. Modern immunotherapeutic agents are designed to stimulate immune responses; antibodies targeting either the programmed-death-protein-1 receptor (PD-1) or its ligand (PD-L1) have produced significant antitumor activity with considerably less toxicity (Philips & Atkins, ; Alsaab et al., ). A major advantage of these agents is the long-lasting clinical benefit, while the setback is that, so far, only a prospectively unidentifiable proportion of patients (approximately 25%) with solid tumors experiences clinical benefit. Unfortunately, some types of tumors, such as bladder and head and neck cancer, are hardly sensitive to immunotherapy. Curcumin inhibits myeloid-derived suppressor cells (MDSCs) in the spleen and tumor tissues, which strongly impair T-cell function and contribute to immune suppression (Tu et al., ). Previous reports have also shown a strong immunomodulatory capability of curcumin: by improving the status of T lymphocytes in peripheral blood, it restricts tumor-induced loss of thymic T cells in tumor-bearing mice (Bhattacharyya et al., ; Chang et al., ; Zhao et al., ).
In addition, curcumin improves the immunotherapeutic activity of a vaccine against late-stage tumors by breaking down the innate and adaptive system barriers and reversing the immunosuppressive tumor microenvironment in an advanced melanoma model (Lu et al., ). We hypothesize that curcumin may synergize the therapeutic intervention of immunotherapeutic agents through various mechanisms. In the present study, a superior blood stable and reduction sensitive curcumin micellar formulation was designed and prepared in order to increase the bioavailability of curcumin as a means to enhance its biological activities. Curcumin was encapsulated in the disulfide crosslinked core of the micelles and its stability both in vitro and in vivo was assessed. Disulfide crosslinking was employed to confer triggered release to the micelles. Subsequently, the in vivo antitumor efficacy with the anti-PD-1 antibody was evaluated.
Materials and methods 2.1. Materials Curcumin was supplied by Soochow Nanomedicine Company (Soochow, China) with a purity of >99.5%. Methoxy poly(ethylene glycol) (mPEG, M n = 2000 g/mol, PDI = 1.03), d , l -lactide, stannous octoate (Sn(Oct) 2 ), l -glutathione (GSH), glutathione monoethyl ester (GSH-OEt), and dithiothreitol (DTT) were purchased from Sigma-Aldrich (Milwaukee, WI). Acetonide-2,2-dimethylolpropanoic anhydride (Ac-DMPA, 1) was synthesized according to a previous report (Gillies & Frechet, ). Dowex H + resin (200–400 mesh), 4-pyrrolidinopyridine (4-py), and pivaloyl chloride were purchased from Acros (Beijing, China). Triethylamine (TEA), ethyl acetate, ether, anhydrous dichloromethane, and ethanol were purchased from Shanghai Titan Scientific Co., Ltd. (Shanghai, China). Anti-PD-1 antibody (PD-1) was purchased from Wuxi AppTec Co. Ltd. (Shanghai, China). All other chemicals were of analytical grade from Sinopharm Chemical Regent Co. Ltd. (Shanghai, China) and used without further purification. 2.2. Cells and animals The normal human colon epithelial cell line (NCM460) was purchased from INCELL Corporation (San Antonio, TX) and cultured in M3 media (San Antonio, TX) supplemented with 10% fetal bovine serum (FBS) and 1% antibiotics (100 U/mL penicillin and 100 µg/mL streptomycin) (Burlington, Canada) at 37 °C in a humidified 5% CO 2 atmosphere. Murine colon adenocarcinoma cells (MC-38) were obtained from the American Type Culture Collection (ATCC, Manassas, VA) and cultured in Roswell Park Memorial Institute (RPMI) 1640 medium (Invitrogen GmbH, Karlsruhe, Germany) containing 10% FBS and 1% antibiotics at 37 °C in a humidified 5% CO 2 atmosphere. Equal numbers of male and female Sprague-Dawley rats (SD rats) of specific pathogen free (SPF) grade (250 ± 5 g) and C57BL/6 mice (21 ± 2 g, 4–6 weeks old) were purchased from Vital River (Beijing, China).
All animals were housed in a room maintained at 23 ± 3 °C and 65–75% humidity, with a controlled 12 h light–dark cycle, for 5–7 days before experiments. Water and complete commercial laboratory animal feed were available. All animal procedures were conducted following the protocol approved by the Institutional Animal Care and Use Committee of Mabspace Biosciences Co. (Soochow, China). 2.3. Characterization The molecular weight and polydispersity of the synthesized polymers were determined by a Waters 1515 gel permeation chromatography (GPC) instrument equipped with a differential refractive-index detector. The measurements were performed using tetrahydrofuran (THF) as the eluent at a flow rate of 1.0 mL/min at 30 °C, with polystyrene standards for the calibration of the columns. 1 H nuclear magnetic resonance ( 1 H NMR) spectra were obtained in deuterated chloroform (CDCl 3 ) using a Bruker NMR spectrometer (AVANCE III, 500 MHz, Billerica, MA) at 25 °C, for validating the chemical structure and calculating the degree of functionalization of the terminal hydroxyl groups of mPEG-PLA-(OH) 4 with lipoic acid. The morphologies of the curcumin-loaded micelles before and after crosslinking were examined using a JEM-2100 transmission electron microscope (TEM, JEOL, Tokyo, Japan). The thermal properties of the polymers were characterized on a differential scanning calorimeter (DSC, DSC-SP, Rheometric Scientific, Piscataway, NJ) through a heating cycle from 20 to 100 °C under nitrogen atmosphere at 10 °C/min, and the curves were recorded for the second run. The mean diameter and size distribution of the micelles were determined using dynamic light scattering (DLS, Zetasizer Nano-ZS, Malvern Instruments, Malvern, UK). The measurements were performed in triplicate. The curcumin concentrations in the micelle dispersions were quantified at 25 °C using an Agilent 1260 high-performance liquid chromatography (HPLC) system (Agilent Technologies, Santa Clara, CA). The wavelength of the detector was 425 nm.
The eluent was a mixture of acetonitrile/water (75/25), and the micelle dispersion was diluted with acetonitrile and filtered using a polyvinylidene fluoride (PVDF) filter before it flowed through an SB-C18 chromatographic column at 1 mL/min. The encapsulation efficiency and loading capacity of the drug were calculated according to Equations (1) and (2), respectively. The weight of drug in micelles was derived from the curcumin concentration in the micelle dispersions, while the weight of the initial drug included free curcumin that was not encapsulated and later removed before the HPLC determination. (1) Encapsulation efficiency = (weight of drug in micelles / weight of the initial drug) × 100% (2) Loading capacity = (weight of drug in micelles / weight of micelles and drug) × 100% 2.4. Synthesis of mPEG-PLA-OH Mono-hydroxyl-terminated mPEG-PLA (mPEG-PLA-OH) was synthesized by ring-opening polymerization of d , l -lactide in the presence of Sn(Oct) 2 . Briefly, mPEG2000 (15 g, 7.5 mmol) was added to a Schlenk bottle and degassed at 130 °C under reduced pressure with magnetic stirring for 2 h to eliminate residual water. d , l -Lactide (5 g, 34.7 mmol) and Sn(Oct) 2 (5 mg, 12.3 nmol) in anhydrous dichloromethane were added into the bottle in a glove-box. Then, the bottle was purged with nitrogen and degassed under high vacuum for 2 h at room temperature to remove the solvent. The bottle was then sealed and maintained at 130 °C under stirring for 15 h. The synthesized mPEG-PLA-OH, with a number-average molecular weight of 500–2000 g/mol, was recovered by dissolving in dichloromethane followed by precipitation in cold ether. The resultant precipitates were filtered and dried under vacuum at room temperature for 24 h. 2.5. Synthesis of mPEG-PLA-Ac mPEG-PLA-OH (15 g, 6 mmol), TEA (2.50 mL, 18.0 mmol), and 4-py (267 mg, 1.80 mmol) were dissolved in 150 mL of dichloromethane in an oven-dried flask. Then, 5.85 g of Ac-DMPA (18.0 mmol) was added and the mixture was stirred at room temperature for 6 h.
After the completion of the reaction, the solvent was evaporated and the residue was recrystallized from cold ethanol three times. After filtration and drying in vacuum at room temperature for 24 h, acetonide-2,2-dimethylol propanoic acid terminated mPEG-PLA (mPEG-PLA-Ac) was obtained as a white solid. 2.6. Synthesis of mPEG-PLA-(OH) 2 mPEG-PLA-Ac (15.0 g, 3.74 mmol) was dissolved in 50 mL of methanol and three teaspoons of Dowex H + resin was added. The mixture was stirred at room temperature for 24 h. After filtration, the polymer solution was precipitated in cold ether and the resultant precipitates (mPEG-PLA-(OH) 2 ) were collected after filtration and dried under vacuum at room temperature for 24 h. 2.7. Synthesis of mPEG-PLA-(Ac) 2 mPEG-PLA-(OH) 2 (10.0 g, 3.80 mmol), TEA (3.16 mL, 22.8 mmol), and 4-py (340 mg, 2.28 mmol) were dissolved in 100 mL of dichloromethane in an oven-dried flask. Then, 7.42 g of Ac-DMPA (22.8 mmol) was added and the mixture was stirred at room temperature for 24 h. After the completion of the reaction, the solvent was evaporated and the residue was recrystallized from cold ethanol three times. After filtration and drying in vacuum at room temperature for 24 h, mPEG-PLA-(Ac) 2 was obtained as a white solid. 2.8. Synthesis of mPEG-PLA-(OH) 4 Ten grams of mPEG-PLA-(Ac) 2 (3.36 mmol) was dissolved in 100 mL of methanol and four teaspoons of Dowex H + resin was added. The mixture was stirred at room temperature for 24 h. After filtration, the polymer solution was precipitated in cold ether and the resultant precipitates (mPEG-PLA-(OH) 4 ) were collected after filtration and drying under vacuum at room temperature for 24 h. 2.9. Synthesis of mPEG-PLA-(LA) 4 Lipoic acid (5.70 g, 27.6 mmol) and TEA (3.82 mL, 27.6 mmol) were dissolved in 50 mL of anhydrous ethyl acetate at −10 °C. Then, 3.5 mL of pivaloyl chloride (27.6 mmol) was slowly added. A white precipitate appeared immediately.
The mixture was stirred at 0 °C for 2 h and then at room temperature for 1 h. The insoluble TEA–HCl was filtered off, the solvent was evaporated, and the residue was dried in vacuum for 1 h. The obtained viscous yellow oil was dissolved in 50 mL of anhydrous dichloromethane and cannulated into a chilled solution of mPEG-PLA-(OH) 4 (6.7 g, 2.3 mmol), TEA (4.0 mL, 27.7 mmol), and 4-py (410 mg, 2.77 mmol) in 70 mL of dichloromethane. The solution was stirred at room temperature for 24 h. After the completion of the reaction, the solvent was evaporated and the residue was recrystallized from cold ethanol three times. The precipitated mPEG-PLA-(LA) 4 was collected by filtration and dried in vacuum at room temperature for 24 h. 2.10. Preparation and stability evaluation of Cur/DCMs Curcumin-loaded noncrosslinked micelles (Cur/NCMs) were prepared by a solid dispersion and thin film hydration method. Typically, 20 mg of curcumin and 380 mg of mPEG-PLA-(LA) 4 were dissolved in 5 mL of ethyl acetate at 40 °C. The solvent was slowly evaporated under vacuum to form a thin film, followed by hydration of the film with 5 mL of ultrapure water. The micelle dispersion was filtered using a PVDF filter (0.22 μm) to remove curcumin that was not encapsulated; such a formulation typically contained ∼5% curcumin and ∼95% mPEG-PLA-(LA) 4 . Curcumin-loaded DCMs (Cur/DCMs) were obtained by the ring-opening polymerization of disulfide-containing lipoyl units using DTT as the catalyst, as reported previously (Noda et al., ; Gong et al., ). Briefly, 1 mL of Tris–HCl buffer (50 mM, pH 8.5) was added to the above Cur/NCMs dispersion. After applying vacuum and purging the bottle with nitrogen, 13.2 mg of DTT (85.6 µmol, 20 mol% relative to the lipoyl units) in 1 mL of water was added and the mixture was stirred at room temperature for 1 h, and then dialyzed against water for 6 h using a dialysis bag (MWCO 8000–14,000). The water was refreshed every hour.
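As a back-of-the-envelope check on the crosslinker stoichiometry described above, the stated DTT charge can be recomputed from the quantities given in the text. This is a rough sketch, not from the paper: the DTT molar mass (154.25 g/mol) is a known constant, and the four-lipoate-per-chain assumption follows from the mPEG-PLA-(LA) 4 structure.

```python
# Consistency check (illustrative, not the authors' calculation): 13.2 mg DTT
# should equal 85.6 umol and represent 20 mol% of the lipoyl units carried by
# 380 mg of mPEG-PLA-(LA)4, which bears four lipoate arms per chain.

DTT_MW = 154.25  # g/mol, dithiothreitol

dtt_umol = 13.2e-3 / DTT_MW * 1e6           # ~85.6 umol, matching the text
lipoyl_umol = dtt_umol / 0.20               # 20 mol% -> ~428 umol lipoyl units
chains_umol = lipoyl_umol / 4               # four lipoyl arms per polymer chain
implied_mn = 380e-3 / (chains_umol * 1e-6)  # implied molar mass of mPEG-PLA-(LA)4, g/mol
```

The implied number-average molecular weight comes out near 3.5 kg/mol, which is plausible for mPEG(2000)-PLA end-capped with four lipoates, so the stated 85.6 µmol / 20 mol% figures are internally consistent.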
Blank NCMs and DCMs were also prepared following the same protocol. After lyophilization, the molecular weights of the micelles before and after crosslinking were characterized by GPC. The stability of Cur/NCMs and Cur/DCMs in saline, 50% ethanol, and 50% sodium dodecyl sulfate (SDS) solution (2.5 mg/mL) was also monitored using DLS. 2.11. Triggered release of curcumin in vitro In order to evaluate the crosslinking effect, the in vitro release profiles of curcumin from DMSO solution (s-Cur), Cur/NCMs, and Cur/DCMs were studied by the dynamic dialysis method with 20% SDS solution as the release medium. Lyophilized micelles were reconstituted and diluted to 1 mg/mL of curcumin with PBS (0.1 M, pH 7.4), and 1 mL was placed into a dialysis tube (MWCO 8000–14,000 Da). The tubes were dialyzed against 50 mL of release medium at 37 ± 0.5 °C at a stirring speed of 100 rpm, with or without 10 mM GSH. At predetermined time intervals, 1 mL of release medium was withdrawn and replenished with an equal volume of the fresh medium. The amount of curcumin released was determined by HPLC analysis as mentioned above. The cumulative release of curcumin was then calculated. The in vitro release studies were carried out in triplicate. 2.12. In vitro cytotoxicity assay The cytotoxicity of s-Cur, Cur/NCMs, and Cur/DCMs against MC-38 cells was determined by a CCK-8 assay. Briefly, MC-38 cells were cultured in RPMI 1640 medium with 10% FBS, penicillin (100 U/mL), and streptomycin (100 µg/mL) for 24 h. Then, the cells were seeded in 96-well plates (Corning, NY) at 10,000 cells/well in 100 µL of medium. After incubation for 24 h, the cells were exposed to s-Cur and equivalent amounts of Cur/NCMs and Cur/DCMs to yield final curcumin concentrations from 0 to 65 µg/mL. For GSH-triggered release experiments, adherent cells were incubated with 10 mM glutathione monoethyl ester (GSH-OEt) for 2 h before exposure to the curcumin formulations.
After incubation for 72 h, 10 μL of CCK-8 solution in 100 μL of growth medium was added to each well and the cells were incubated for another 1 h. Then, 10 μL of 1% SDS (0.1 g of SDS dissolved in PBS to obtain 10 mL of solution) was added to the culture medium to stop the reaction, and the absorbance was read on a Synergy HTX multi-mode reader (BioTek, Winooski, VT) at 450 nm. The cytotoxicity assay was performed in triplicate and the cell viability was calculated with the following equation: (3) Cell viability = (A t − A B ) / (A c − A B ) × 100%, where A t is the absorption value of the samples, A c is the absorption value of the control group, and A B is the absorption value of the blank group. The viability of NCM460 and MC-38 cells incubated with various concentrations of blank NCMs and DCMs for 72 h was also evaluated. 2.13. Hemolysis assay Human whole blood samples were collected from a volunteer in an ethylenediaminetetraacetic acid (EDTA) precoated tube. The authors assert that all procedures comply with the ethical standards of the relevant national and institutional committees on human experimentation and with the Helsinki Declaration of 1975, as revised in 2008. Written informed consent was obtained from the volunteer before the blood draw in this study. Five milliliters of whole blood was transferred into a tube with 10 mL of calcium- and magnesium-free Dulbecco’s phosphate buffered saline (PBS, Grand Island, NY) and centrifuged at 500× g for 10 min to isolate RBCs. This purification step was repeated five times, and then the washed RBCs were diluted with PBS to 50 mL. To test the hemolytic activity of NCMs and DCMs, 0.2 mL of diluted RBC suspension (∼4.5 × 10 8 cells/mL) was mixed with 0.8 mL of NCMs or DCMs suspension in PBS. The final concentration of NCMs and DCMs ranged from 0.001 to 1000 μg/mL. D.I. water (+RBCs) and PBS (+RBCs) were used as the positive control and negative control, respectively. All samples were placed on a rocking shaker in an incubator at 37 °C for 3 h.
After incubation, the samples were centrifuged at 10,016× g for 3 min. The hemoglobin absorption in the supernatant was measured at 540 nm, with 655 nm as a reference, using a Synergy HTX multi-mode reader. Percent hemolysis was calculated with the following equation: (4) Percent hemolysis (%) = (A s − A nc ) / (A pc − A nc ) × 100, where A s , A nc , and A pc are the absorption values of the samples, the negative control, and the positive control, respectively. 2.14. Western blot assay MC-38 cells were lysed using a whole protein extraction kit (KeyGEN BioTECH, Nanjing, China, KGP250). Protein concentration was determined using a bicinchoninic acid (BCA) protein assay (KeyGEN BioTECH, Nanjing, China, KGPBCA). Protein extracts were electrophoresed in 10% SDS-PAGE and transferred to a nitrocellulose (NC) membrane, which was blocked with 5% skim milk (prepared in Tris-buffered saline with Tween 20, TBST) at room temperature for 1 h. Then, the membranes were immunoblotted with primary antibodies. For detection, HRP-conjugated secondary antibodies (Ray Antibody Biotech, Beijing, China, RM2001L) and a chemiluminescent HRP substrate kit (EpiZyme, Shanghai, China, SQ2O2) were used. As the molecular weights of the target bands were similar, a western blot fast stripping buffer (EpiZyme, Shanghai, China, PS107) was used. The following primary antibodies were used: anti-p-MEK1/2 (Cell Signaling Technology (CST), 41G9, Boston, MA) and anti-p-Erk1/2 (CST, 197G2). The blots were quantified using ImageJ (National Institutes of Health, Bethesda, MD) and normalized to the control group. 2.15. Pharmacokinetics Nine SD rats were randomly divided into three groups (three rats per group), receiving s-Cur, Cur/NCMs, or Cur/DCMs. Following intravenous (i.v.) administration of the three curcumin formulations (20 mg/kg), 0.1 mL of blood was collected from the tail vein at predetermined time intervals. The blood samples were centrifuged for 6 min at 5867× g to obtain the plasma.
Then, the curcumin concentrations in the plasma were analyzed using a liquid chromatography mass spectrometer (LCMS-2020) equipped with a Shimadzu UV-visible spectrophotometer (Columbia, MD). The detection limit of the equipment was 1 ng/mL. We employed non-compartmental analysis in WinNonlin software V6.2.1 to calculate the major pharmacokinetic parameters. 2.16. In vivo antitumor efficacy The therapeutic efficacy of saline, Cur/DCMs, PD-1, and the combination therapy of Cur/DCMs and PD-1 was examined in C57BL/6 mice bearing MC-38 colon tumors. MC-38 cells were suspended in RPMI 1640 culture medium containing 10% FBS. A suspension of 3 × 10 5 cells in 100 μL of medium was injected subcutaneously into the animal armpits. Once the tumor volume in the xenografts reached ∼70 mm 3 , the mice were randomly divided into four groups (six animals per group) receiving physiological saline (100 µL every day), Cur/DCMs (i.v., 40 mg/kg every day), PD-1 (i.v., 10 mg/kg every week), or the combination therapy of Cur/DCMs (i.v., 40 mg/kg every day) and PD-1 (i.v., 10 mg/kg every week). After the initial treatment, the mice were continuously monitored for 21 days in terms of body weight and tumor dimensions (length and width). In accordance with animal welfare requirements, mice were euthanized once the tumor volume exceeded 2000 mm 3 . Tumor volume was calculated according to the following equation: (5) V = 1/2 × length × width 2 . 2.17. Statistical analysis All values presented in this work are the average of at least three independent experiments unless otherwise stated, and the error bars represent the standard deviations. The difference between any two treatment groups was determined using one-way ANOVA, followed by Tukey’s post hoc or nonparametric test (SPSS version 17.0, Chicago, IL). p < 0.05 indicated statistical significance.
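The quantitative read-outs in the methods above reduce to a few lines of arithmetic. The following is a minimal illustrative Python sketch (not the authors' code; all numeric inputs are made-up examples) of Equations (1)–(5), together with a linear-trapezoidal AUC, the simplest estimate underlying non-compartmental pharmacokinetic analysis of the kind performed in WinNonlin:

```python
# Illustrative sketch of Equations (1)-(5) and a linear-trapezoidal AUC.
# All example numbers are hypothetical, not study data.

def encapsulation_efficiency(drug_in_micelles: float, initial_drug: float) -> float:
    """Equation (1): percent of the fed drug that was encapsulated."""
    return drug_in_micelles / initial_drug * 100.0

def loading_capacity(drug_in_micelles: float, micelles_and_drug: float) -> float:
    """Equation (2): drug weight over total (carrier + drug) weight, in percent."""
    return drug_in_micelles / micelles_and_drug * 100.0

def cell_viability(a_t: float, a_c: float, a_b: float) -> float:
    """Equation (3): blank-corrected absorbance ratio, in percent."""
    return (a_t - a_b) / (a_c - a_b) * 100.0

def percent_hemolysis(a_s: float, a_nc: float, a_pc: float) -> float:
    """Equation (4): sample against PBS (negative) and water (positive) controls."""
    return (a_s - a_nc) / (a_pc - a_nc) * 100.0

def tumor_volume(length: float, width: float) -> float:
    """Equation (5): V = 1/2 * length * width^2 (mm -> mm^3)."""
    return 0.5 * length * width ** 2

def auc_trapezoidal(t, c):
    """AUC(0 -> t_last) by the linear trapezoidal rule over (time, conc) points."""
    return sum(0.5 * (c[i] + c[i - 1]) * (t[i] - t[i - 1]) for i in range(1, len(t)))

# Hypothetical example: 18 mg of the 20 mg curcumin feed encapsulated in 380 mg polymer.
ee = encapsulation_efficiency(18.0, 20.0)        # 90.0 %
lc = loading_capacity(18.0, 380.0 + 18.0)        # ~4.5 %
v = tumor_volume(10.0, 6.0)                      # 180.0 mm^3
auc = auc_trapezoidal([0, 1, 2, 4], [100.0, 80.0, 50.0, 20.0])  # 225.0
```

Note that the linear trapezoidal rule is only the basic building block; WinNonlin's non-compartmental module also offers variants such as log-linear interpolation on the declining phase and extrapolation of AUC to infinity.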
Materials Curcumin was supplied by Soochow Nanomedicine Company (Soochow, China) with a purity of >99.5%. Methoxy poly(ethylene glycol) (mPEG, M n = 2000 g/mol, PDI = 1.03), d , l -lactide, stannous octoate (Sn(Oct) 2 ), l -glutathione (GSH), glutathione monoethyl ester (GSH-OEt), and dithiothreitol (DTT) were purchased from Sigma-Aldrich (Milwaukee, WI). Acetonide-2,2-dimethylolpropanoic anhydride (Ac-DMPA, 1) was synthesized according to a previous report (Gillies & Frechet, ). Dowex H + resin (200–400 mesh), 4-pyrrolidinopyridine (4-py), and pivaloyl chloride were purchased from Acros (Beijing, China). Triethylamine (TEA), ethyl acetate, ether, anhydrous dichloromethane and ethanol were purchased from Shanghai Titan Scientific Co., Ltd. (Shanghai, China). Anti-PD-1 antibody (PD-1) was purchased from Wuxi AppTec Co. Ltd. (Shanghai, China). All other chemicals were of analytical grade from Sinopharm Chemical Regent Co. Ltd. (Shanghai, China) and used without further purification.
Cells and animals Normal human colon epithelial cell line (NCM460) was purchased from INCELL Corporation (San Antonio, TX) and cultured in M3 media (San Antonio, TX) supplemented with 10% fetal bovine serum (FBS) and 1% antibiotics (100 U/mL penicillin and 100 µg/mL streptomycin) (Burlington, Canada) at 37 °C in a humidified 5% CO2 atmosphere. Murine colon adenocarcinoma cells (MC-38) were obtained from American Type Culture Collection (ATCC, Manassas, VA) and cultured in Roswell Park Memorial Institute (RPMI) 1640 medium (Invitrogen GmbH, Karlsruhe, Germany) containing 10% FBS and 1% antibiotics at 37 °C in a humidified 5% CO2 atmosphere. Specific-pathogen-free (SPF) grade Sprague-Dawley rats (SD rats, 250 ± 5 g, male and female in equal numbers) and C57BL/6 mice (21 ± 2 g, 4–6 weeks old) were purchased from Vital River (Beijing, China). All animals were housed in a room maintained at 23 ± 3 °C and 65–75% humidity, with a controlled 12 h light–dark cycle, for 5–7 days before experiments. Water and a complete commercial laboratory diet were freely available. All animal procedures were conducted following the protocol approved by the Institutional Animal Care and Use Committee of Mabspace Biosciences Co. (Soochow, China).
Characterization The molecular weight and polydispersity of the synthesized polymers were determined by a Waters 1515 gel permeation chromatography (GPC) instrument equipped with a differential refractive-index detector. The measurements were performed using tetrahydrofuran (THF) as the eluent at a flow rate of 1.0 mL/min at 30 °C, with polystyrene standards for the calibration of the columns. 1H nuclear magnetic resonance (1H NMR) spectra were obtained in deuterated chloroform (CDCl3) on a Bruker NMR spectrometer (AVANCE III, 500 MHz, Billerica, MA) at 25 °C to validate the chemical structure and to calculate the degree of functionalization of the terminal hydroxyl groups in mPEG-PLA-(OH)4 with lipoic acid. The morphologies of the curcumin-loaded micelles before and after crosslinking were examined using a JEM-2100 transmission electron microscope (TEM, JEOL, Tokyo, Japan). The thermal properties of the polymers were characterized on a differential scanning calorimeter (DSC, DSC-SP, Rheometric Scientific, Piscataway, NJ) through a heating cycle from 20 to 100 °C under nitrogen atmosphere at 10 °C/min, and the curves were recorded for the second run. The mean diameter and size distribution of the micelles were determined using dynamic light scattering (DLS, Zetasizer Nano-ZS, Malvern Instruments, Malvern, UK). The measurements were performed in triplicate. The curcumin concentrations in the micelle dispersions were quantified at 25 °C using an Agilent 1260 high-performance liquid chromatography (HPLC) system (Agilent Technologies, Santa Clara, CA). The wavelength of the detector was 425 nm. The eluent was a mixture of acetonitrile/water (75/25); the micelle dispersion was diluted with acetonitrile and filtered through a polyvinylidene fluoride (PVDF) filter before it flowed through an SB-C18 chromatographic column at 1 mL/min. The encapsulation efficiency and loading capacity of the drug were calculated according to Equations (1) and (2), respectively.
The weight of drug in micelles was derived from the curcumin concentration in the micelle dispersions, while the weight of the initial drug included free curcumin that was not encapsulated and later removed before the HPLC determination.
(1) Encapsulation efficiency = (weight of drug in micelles / weight of the initial drug) × 100%
(2) Loading capacity = (weight of drug in micelles / weight of micelles and drug) × 100%
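Equations (1) and (2) translate directly into code. A minimal sketch follows; the 19.8 mg recovered-drug figure in the example is a hypothetical HPLC result, not a value reported in the paper:

```python
def encapsulation_efficiency(drug_in_micelles_mg, initial_drug_mg):
    """Eq. (1): percentage of the fed drug that ended up inside the micelles."""
    return drug_in_micelles_mg / initial_drug_mg * 100.0

def loading_capacity(drug_in_micelles_mg, carrier_mg):
    """Eq. (2): drug mass as a percentage of the total (carrier + drug) mass."""
    return drug_in_micelles_mg / (carrier_mg + drug_in_micelles_mg) * 100.0

# Example with the paper's feed (20 mg curcumin, 380 mg polymer), assuming
# HPLC finds 19.8 mg of curcumin retained after filtration (hypothetical):
ee = encapsulation_efficiency(19.8, 20.0)   # 99.0 %
dlc = loading_capacity(19.8, 380.0)         # ~4.95 wt%
```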
Synthesis of mPEG-PLA-OH Mono-hydroxyl-terminated mPEG-PLA (mPEG-PLA-OH) was synthesized by a ring-opening polymerization of d,l-lactide in the presence of Sn(Oct)2. Briefly, mPEG2000 (15 g, 7.5 mmol) was added to a Schlenk bottle and degassed at 130 °C under reduced pressure with magnetic stirring for 2 h to eliminate the residual water. d,l-Lactide (5 g, 34.7 mmol) and Sn(Oct)2 (5 mg, 12.3 µmol) in anhydrous dichloromethane were added into the bottle in a glove-box. The bottle was then purged with nitrogen and degassed under high vacuum for 2 h at room temperature to remove the solvent. Then, the bottle was sealed and maintained at 130 °C under stirring for 15 h. The synthesized mPEG-PLA-OH (mPEG block Mn 2000 g/mol, PLA block Mn ∼500 g/mol) was recovered by dissolving in dichloromethane followed by precipitation in cold ether. The resultant precipitates were filtered and dried under vacuum at room temperature for 24 h.
Synthesis of mPEG-PLA-Ac mPEG-PLA-OH (15 g, 6 mmol), TEA (2.50 mL, 18.0 mmol), and 4-py (267 mg, 1.80 mmol) were dissolved in 150 mL of dichloromethane in an oven-dried flask. Then, 5.85 g of Ac-DMPA (18.0 mmol) was added and the mixture was stirred at room temperature for 6 h. After the completion of the reaction, the solvent was evaporated and the residue was recrystallized from cold ethanol three times. After filtration and drying under vacuum at room temperature for 24 h, acetonide-2,2-dimethylolpropanoic acid terminated mPEG-PLA (mPEG-PLA-Ac) was obtained as a white solid.
Synthesis of mPEG-PLA-(OH)2 mPEG-PLA-Ac (15.0 g, 3.74 mmol) was dissolved in 50 mL of methanol and three teaspoons of Dowex H+ resin were added. The mixture was stirred at room temperature for 24 h. After filtration, the polymer solution was precipitated in cold ether and the resultant precipitates (mPEG-PLA-(OH)2) were collected after filtration and dried under vacuum at room temperature for 24 h.
Synthesis of mPEG-PLA-(Ac)2 mPEG-PLA-(OH)2 (10.0 g, 3.80 mmol), TEA (3.16 mL, 22.8 mmol), and 4-py (340 mg, 2.28 mmol) were dissolved in 100 mL of dichloromethane in an oven-dried flask. Then, 7.42 g of Ac-DMPA (22.8 mmol) was added and the mixture was stirred at room temperature for 24 h. After the completion of the reaction, the solvent was evaporated and the residue was recrystallized from cold ethanol three times. After filtration and drying under vacuum at room temperature for 24 h, mPEG-PLA-(Ac)2 was obtained as a white solid.
Synthesis of mPEG-PLA-(OH)4 Ten grams of mPEG-PLA-(Ac)2 (3.36 mmol) was dissolved in 100 mL of methanol and four teaspoons of Dowex H+ resin were added. The mixture was stirred at room temperature for 24 h. After filtration, the polymer solution was precipitated in cold ether and the resultant precipitates mPEG-PLA-(OH)4 were collected after filtration and drying under vacuum at room temperature for 24 h.
Synthesis of mPEG-PLA-(LA) 4 Lipoic acid (5.70 g, 27.6 mmol) and TEA (3.82 mL, 27.6 mmol) were dissolved in 50 mL of anhydrous ethyl acetate at −10 °C. Then, 3.5 mL of pivaloyl chloride (27.6 mmol) was slowly added. A white precipitate appeared immediately. The mixture was stirred at 0 °C for 2 h and then at room temperature for 1 h. The insoluble TEA–HCl was filtered off, the solvent was evaporated, and the residue was dried in vacuum for 1 h. The obtained viscous yellow oil was dissolved in 50 mL of anhydrous dichloromethane and cannulated into a chilled solution of mPEG-PLA-(OH) 4 (6.7 g, 2.3 mmol), TEA (4.0 mL, 27.7 mmol), and 4-py (410 mg, 2.77 mmol) in 70 mL of dichloromethane. The solution was stirred at room temperature for 24 h. After the completion of the reaction, the solvent was evaporated and the residue was recrystallized from cold ethanol for three times. The precipitated mPEG-PLA-(LA) 4 was collected by filtration and dried in vacuum at room temperature for 24 h.
Preparation and stability evaluation of Cur/DCMs Curcumin-loaded noncrosslinked micelles (Cur/NCMs) were prepared by a solid dispersion and thin film hydration method. Typically, 20 mg of curcumin and 380 mg of mPEG-PLA-(LA)4 were dissolved in 5 mL of ethyl acetate at 40 °C. The solvent was slowly evaporated under vacuum to form a thin film, followed by hydration of the film with 5 mL of ultrapure water. The micelle dispersion was filtered using a PVDF filter (0.22 μm) to remove curcumin that was not encapsulated; such a formulation typically contained ∼5% curcumin and ∼95% mPEG-PLA-(LA)4. Curcumin-loaded DCMs (Cur/DCMs) were obtained by the ring-opening polymerization of the disulfide-containing lipoyl units using DTT as the catalyst, as reported previously (Noda et al., ; Gong et al., ). Briefly, 1 mL of Tris–HCl buffer (50 mM, pH 8.5) was added to the above Cur/NCMs dispersion. After evacuating and purging the bottle with nitrogen, 13.2 mg of DTT (85.6 µmol, 20 mol% relative to the lipoyl units) in 1 mL of water was added; the mixture was stirred at room temperature for 1 h and then dialyzed against water for 6 h using a dialysis bag (MWCO 8000–14,000). The water was refreshed every hour. Blank NCMs and DCMs were also prepared following the same protocol. After lyophilization, the molecular weights of the micelles before and after crosslinking were characterized by GPC. The stability of Cur/NCMs and Cur/DCMs in saline, 50% ethanol and 50% sodium dodecyl sulfate (SDS) (2.5 mg/mL) was also monitored using DLS.
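As a consistency check on the crosslinking stoichiometry, the quoted DTT amount (13.2 mg, 85.6 µmol, 20 mol% of the lipoyl units) can be reproduced from the polymer feed. The effective chain molar mass of ~3550 g/mol (with four lipoyl units per chain) is our assumption for illustration; only the DTT quantities are stated in the text:

```python
MW_DTT = 154.25        # g/mol, dithiothreitol
LIPOYL_PER_CHAIN = 4   # mPEG-PLA-(LA)4 carries four lipoyl end groups

def dtt_mass_mg(polymer_mg, chain_mn_g_mol, dtt_mol_fraction=0.20):
    """DTT mass (mg) needed for a given mol% relative to the lipoyl units."""
    lipoyl_mol = polymer_mg / 1000.0 / chain_mn_g_mol * LIPOYL_PER_CHAIN
    return lipoyl_mol * dtt_mol_fraction * MW_DTT * 1000.0

# 380 mg polymer at an assumed chain Mn of ~3550 g/mol -> ~13.2 mg DTT,
# i.e. the 85.6 µmol quoted in the protocol.
```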
Triggered release of curcumin in vitro In order to evaluate the crosslinking effect, the in vitro release profiles of curcumin from DMSO solution (s-Cur), Cur/NCMs, and Cur/DCMs were studied by a dynamic dialysis method with 20% SDS solution as the release medium. Lyophilized micelles were reconstituted and diluted to 1 mg/mL of curcumin with PBS (0.1 M, pH 7.4), and 1 mL was placed into a dialysis tube (MWCO 8000–14,000 Da). The tubes were dialyzed against 50 mL of release medium at 37 ± 0.5 °C at a stirring speed of 100 rpm, with or without 10 mM GSH. At predetermined time intervals, 1 mL of release medium was withdrawn and replenished with an equal volume of fresh medium. The amount of curcumin released was determined by HPLC analysis as mentioned above, and the cumulative release of curcumin was calculated. The in vitro release studies were carried out in triplicate.
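Cumulative release from serial sampling needs a correction for the drug removed with each withdrawn aliquot; the paper does not spell out the bookkeeping, so the sketch below uses the standard approach (the sample concentrations in the usage example are illustrative):

```python
def cumulative_release(samples_ug_per_ml, v_medium_ml=50.0, v_sample_ml=1.0,
                       total_drug_ug=1000.0):
    """Percent cumulative release at each sampling time.

    Defaults follow the protocol: 50 mL release medium, 1 mL aliquots,
    and 1 mL of a 1 mg/mL curcumin dispersion in the dialysis tube
    (total_drug_ug = 1000). Drug removed in earlier aliquots is added
    back so it is not counted as lost.
    """
    released, withdrawn_ug = [], 0.0
    for c in samples_ug_per_ml:          # c: measured conc. (µg/mL) at this time point
        in_medium_ug = c * v_medium_ml
        released.append((in_medium_ug + withdrawn_ug) / total_drug_ug * 100.0)
        withdrawn_ug += c * v_sample_ml  # removed with this aliquot
    return released

# e.g. cumulative_release([2.0, 4.0, 6.0]) -> [10.0, 20.2, 30.6] (% released)
```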
In vitro cytotoxicity assay The cytotoxicity of s-Cur, Cur/NCMs, and Cur/DCMs against MC-38 cells was determined by a CCK-8 assay. Briefly, MC-38 cells were cultured in RPMI1640 medium with 10% FBS, penicillin (100 U/mL), and streptomycin (100 µg/mL) for 24 h. Then, the cells were seeded in 96-well plates (Corning, NY) at 10,000 cells/well in 100 µL of medium. After incubation for 24 h, the cells were exposed to s-Cur or equivalent amounts of Cur/NCMs and Cur/DCMs to yield final curcumin concentrations from 0 to 65 µg/mL. For the GSH-triggered release experiments, adherent cells were incubated with 10 mM glutathione monoethyl ester (GSH-OEt) for 2 h before exposure to the curcumin formulations. After incubation for 72 h, 10 μL of CCK8 solution in 100 μL of growth medium was added to each well and the cells were incubated for another 1 h. Then, 10 μL of 1% SDS (0.1 g of SDS dissolved in PBS to a final volume of 10 mL) was added to each well to stop the reaction, and the absorbance was read on a Synergy HTX multi-mode reader (BioTek, Winooski, VT) at 450 nm. The cytotoxicity assay was performed in triplicate and the cell viability was calculated with the following equation: (3) Cell viability = (At − AB) / (Ac − AB) × 100%, where At is the absorption value of the samples, Ac is the absorption value of the control group, and AB is the absorption value of the blank group. The viability of NCM460 and MC-38 cells incubated with various concentrations of blank NCMs and DCMs for 72 h was also evaluated.
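Equation (3) in code form; the absorbance values in the example are illustrative, not measured values from the paper:

```python
def cell_viability(a_sample, a_control, a_blank):
    """Eq. (3): CCK-8 viability (%) relative to the untreated control."""
    return (a_sample - a_blank) / (a_control - a_blank) * 100.0

# e.g. illustrative A450 readings: blank 0.10, control 1.30, sample 0.70
# -> (0.70 - 0.10) / (1.30 - 0.10) = 50 % viability
```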
Hemolysis assay Human whole blood samples were collected from a volunteer in an ethylenediaminetetraacetic acid (EDTA) precoated tube. The authors assert that all procedures comply with the ethical standards of the relevant national and institutional committees on human experimentation and with the Helsinki Declaration of 1975, as revised in 2008. Written informed consent was obtained from the volunteer before the blood draw in this study. Five milliliters of whole blood was transferred into a tube with 10 mL of calcium- and magnesium-free Dulbecco’s phosphate buffered saline (PBS, Grand Island, NY) and centrifuged at 500× g for 10 min to isolate RBCs. This purification step was repeated five times, and then the washed RBCs were diluted with PBS to 50 mL. To test the hemolytic activity of NCMs and DCMs, 0.2 mL of the diluted RBC suspension (∼4.5 × 10^8 cells/mL) was mixed with 0.8 mL of NCMs or DCMs suspension in PBS. The final concentrations of NCMs and DCMs ranged from 0.001 to 1000 μg/mL. D.I. water (+RBCs) and PBS (+RBCs) were used as the positive control and negative control, respectively. All samples were placed on a rocking shaker in an incubator at 37 °C for 3 h. After incubation, the samples were centrifuged at 10,016× g for 3 min. The hemoglobin absorption in the supernatant was measured at 540 nm, with 655 nm as a reference, using a Synergy HTX multi-mode reader. Percent hemolysis was calculated with the following equation: (4) Percent hemolysis (%) = (As − Anc) / (Apc − Anc) × 100, where As, Anc, and Apc are the absorption values of the samples, the negative control, and the positive control, respectively.
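Equation (4) in code, with an optional screening helper over a concentration series; the 5% acceptance threshold is a commonly used criterion and our assumption, not a value stated in the paper:

```python
def percent_hemolysis(a_sample, a_neg, a_pos):
    """Eq. (4): hemolysis relative to the PBS (negative) and D.I. water
    (positive) controls, from reference-corrected A540 readings."""
    return (a_sample - a_neg) / (a_pos - a_neg) * 100.0

def is_hemocompatible(a_samples, a_neg, a_pos, threshold=5.0):
    """Screen a concentration series; <5 % hemolysis is a commonly used
    acceptance criterion (our assumption, not stated in the paper)."""
    return all(percent_hemolysis(a, a_neg, a_pos) < threshold for a in a_samples)
```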
Western blot assay MC-38 cells were lysed using a whole protein extraction kit (KeyGEN BioTECH, Nanjing, China, KGP250). Protein concentration was determined using a bicinchoninic acid (BCA) protein assay (KeyGEN BioTECH, Nanjing, China, KGPBCA). Protein extracts were electrophoresed in 10% SDS-PAGE, transferred to a nitrocellulose (NC) membrane, and blocked with 5% skim milk (prepared in Tris-buffered saline with Tween 20, TBST) at room temperature for 1 h. Then, the membranes were immunoblotted with primary antibodies. For detection, HRP-conjugated secondary antibodies (Ray Antibody Biotech, Beijing, China, RM2001L) and a chemiluminescent HRP substrate kit (EpiZyme, Shanghai, China, SQ2O2) were used. As the molecular weights of the target bands are similar, a western blot fast stripping buffer (EpiZyme, Shanghai, China, PS107) was used. The following primary antibodies were used: anti-p-MEK1/2 (Cell Signaling Technology (CST), 41G9, Boston, MA) and anti-p-Erk1/2 (CST, 197G2). The blots were quantified using ImageJ (National Institutes of Health, Bethesda, MD) and normalized to the control group.
Pharmacokinetics Nine SD rats were randomly divided into three groups (three rats per group), receiving s-Cur, Cur/NCMs, and Cur/DCMs. Following the intravenous (i.v.) administration of the three curcumin formulations (20 mg/kg), 0.1 mL of blood was collected from the tail vein at predetermined time intervals. The blood samples were centrifuged for 6 min at 5867× g to obtain the plasma. Then, the curcumin concentrations in the plasma were analyzed using a liquid chromatography mass spectrometer (LCMS-2020) equipped with a Shimadzu UV-visible spectrophotometer (Columbia, MD). The detection limit of the equipment was 1 ng/mL. Non-compartmental analysis in WinNonlin software V6.2.1 was employed to calculate the major pharmacokinetic parameters.
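The core of a non-compartmental analysis can be approximated in a few lines (linear trapezoidal AUC and AUMC to the last sampling point, terminal log-linear regression for t1/2, and MRT = AUMC/AUC). This is a simplified sketch of the workflow, not a replica of the WinNonlin algorithms:

```python
import math

def nca(times_h, conc_ng_ml, n_terminal=3):
    """Minimal non-compartmental analysis from a plasma concentration-time
    profile. Terminal rate constant from an ordinary least-squares fit of
    ln(C) over the last n_terminal points."""
    auc = aumc = 0.0
    for i in range(1, len(times_h)):
        dt = times_h[i] - times_h[i - 1]
        auc += 0.5 * (conc_ng_ml[i] + conc_ng_ml[i - 1]) * dt
        aumc += 0.5 * (times_h[i] * conc_ng_ml[i]
                       + times_h[i - 1] * conc_ng_ml[i - 1]) * dt
    ts = times_h[-n_terminal:]
    lncs = [math.log(c) for c in conc_ng_ml[-n_terminal:]]
    t_mean = sum(ts) / len(ts)
    ln_mean = sum(lncs) / len(lncs)
    slope = (sum((t - t_mean) * (l - ln_mean) for t, l in zip(ts, lncs))
             / sum((t - t_mean) ** 2 for t in ts))
    lambda_z = -slope                    # terminal elimination rate constant (1/h)
    return {"AUC": auc,                  # ng*h/mL (to last time point)
            "t1/2": math.log(2.0) / lambda_z,
            "MRT": aumc / auc}
```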
In vivo antitumor efficacy The therapeutic efficacy of saline, Cur/DCMs, PD-1, and the combination of Cur/DCMs and PD-1 was examined in C57BL/6 mice bearing MC-38 colon tumors. MC-38 cells were suspended in RPMI1640 culture medium containing 10% FBS. A suspension of 3 × 10^5 cells in 100 μL of medium was injected subcutaneously into the animal armpits. Once the tumor volume in the xenografts reached ∼70 mm^3, the mice were randomly divided into four groups (six animals per group) receiving physiological saline (100 µL every day), Cur/DCMs (i.v., 40 mg/kg every day), PD-1 (i.v., 10 mg/kg every week), or the combination of Cur/DCMs (i.v., 40 mg/kg every day) and PD-1 (i.v., 10 mg/kg every week). After the initial treatment, the mice were continuously monitored for 21 days in terms of body weight and tumor dimensions (length and width). In accordance with animal welfare requirements, mice were euthanized once the tumor volume exceeded 2000 mm^3. Tumor volume was calculated according to the following equation: (5) V = (length × width^2) / 2.
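Equation (5) and the 2000 mm³ humane endpoint used in the protocol, in code form:

```python
def tumor_volume_mm3(length_mm, width_mm):
    """Eq. (5): V = (length * width^2) / 2, the standard caliper estimate."""
    return 0.5 * length_mm * width_mm ** 2

def should_euthanize(length_mm, width_mm, limit_mm3=2000.0):
    """Humane-endpoint check from the protocol (tumor volume > 2000 mm^3)."""
    return tumor_volume_mm3(length_mm, width_mm) > limit_mm3
```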
Statistical analysis All values presented in this work were the average of at least three independent experiments unless otherwise stated, and the error bars represent the standard deviations. The difference between any two treatment groups was determined using one-way ANOVA, followed by Tukey’s post hoc or nonparametric test (SPSS version 17.0, Chicago, IL). p < 0.05 indicated statistical significance.
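The group comparisons above used one-way ANOVA in SPSS. The F statistic itself is simple to compute by hand, as sketched below; the p-value lookup against the F distribution (with k−1 and n−k degrees of freedom) and the post hoc tests are left to a statistics package:

```python
def one_way_anova_F(*groups):
    """F statistic of a one-way ANOVA: between-group mean square divided
    by within-group mean square, over any number of sample groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```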
Results and discussion 3.1. Synthesis and characterization of mPEG-PLA-(LA)4 The synthetic method was controlled precisely in every step, which led to a well-defined structure of the resulting dendritic lipoic acid functionalized block copolymer. The detailed synthetic route is illustrated in . The first step was to synthesize the hydroxyl-terminated mPEG-PLA-OH block copolymer via a ring-opening polymerization of d,l-lactide initiated by mPEG2000 and catalyzed by stannous octoate (Sn(Oct)2). After precipitation in cold ether, mPEG-PLA-OH was obtained as a white powder in up to ∼90% yield. The molecular weight of the synthesized mPEG-PLA-OH was calculated from the 1H NMR spectrum . The Mn of the mPEG block was 2000 g/mol and the molecular weight of the PLA block was 500 g/mol, with a polydispersity index (PDI) of 1.04, as characterized by GPC analysis . The second step, the coupling of the terminal hydroxyl groups in mPEG-PLA-OH with second-generation acetonide-terminated polyester dendrons based on 2,2-bis(hydroxymethyl)propionic acid, was achieved with a simple divergent growth approach (Gillies & Frechet, ). In the present study, 4-py was used as the condensing agent, which was found to be quite efficient and easily removed by simple recrystallization from ethanol. The complete end-capping of the terminal hydroxyl groups by polyester dendrons was confirmed by the 1H NMR spectrum and the GPC analysis . The average end-capping ratio of mPEG-PLA-OH with polyester dendrons was determined by comparing the integration ratio between signal a, assigned to the terminal methyl group in the mPEG unit at 3.47 ppm, and signal h, assigned to the methylene groups in the dendron unit, with the theoretical value of 3:8 for 100% end-capping. In all cases, ratios larger than 90% indicated essentially complete conversion of the end groups. The third step was to synthesize the lipoic acid-terminated telodendrimer, shown as mPEG-PLA-(LA)4 in .
Although the carboxyl groups in lipoic acid can also react with the hydroxyl end groups in mPEG-PLA-(OH)4 in the presence of an acylating catalyst such as dicyclohexylcarbodiimide (DCC) (Gotsche et al., ), this direct coupling reaction was barely possible because of the low reactivity of the terminal hydroxyl groups in the polymer chains. In the present study, a mixed anhydride of lipoic acid and pivaloyl chloride was used as the acylating agent to convert the terminal hydroxyl groups of mPEG-PLA-(OH)4 into the lipoic acid structure; it proved to be a powerful acylating reagent, yielding complete conversion while not being detrimental to the polymer backbone, as evidenced in the 1H NMR spectrum (Noda et al., ; Fan et al., ). After the coupling reaction, new peaks at 3.10–3.20, 2.44–2.49, 2.30–2.35, and 1.89–1.94 ppm, attributed to the lipoic acid, are shown in , indicating the formation of the expected macromolecule. The end-capping efficiency was also calculated to be about 100% by comparing the integration ratio between the signal of the terminal methyl group in mPEG at 3.47 ppm and the signals of i, k, and o in the lipoyl unit. As described above, the terminal hydroxyl groups in mPEG-PLA-(OH)4 were completely capped by the mixed anhydride. The GPC profiles in also confirmed that the end-capping reaction of mPEG-PLA-(OH)4 resulted in a slight increase of about 600 g/mol in the polymer molecular weight, while the PDI remained constant. Altogether, our results clearly confirm the complete end-capping of the terminal hydroxyl groups in mPEG-PLA-(OH)4 with lipoic acid without changing the backbone of the block copolymer. DSC thermograms revealed endothermic peaks for mPEG-PLA-OH, mPEG-PLA-Ac, mPEG-PLA-(OH)2, mPEG-PLA-(Ac)2, mPEG-PLA-(OH)4, and mPEG-PLA-(LA)4 . All scans were run up to 100 °C and no thermal changes above 60 °C were observed for any of the samples.
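The integral-ratio end-capping calculation used in both coupling steps above can be sketched as follows (integrals are referenced to the 3H mPEG terminal OCH3 signal; 8H is the theoretical dendron methylene contribution for full capping):

```python
def end_capping_percent(integral_mpeg_och3, integral_dendron_ch2):
    """End-capping ratio from 1H NMR integrals: the mPEG terminal OCH3
    (3H, signal a, 3.47 ppm) serves as the internal reference; complete
    capping gives 8H of dendron methylene protons (signal h), i.e. a
    theoretical integral ratio of 3:8."""
    observed = integral_dendron_ch2 / integral_mpeg_och3
    theoretical = 8.0 / 3.0
    return observed / theoretical * 100.0
```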
All the end-functionalized polymers exhibited a single endothermic peak below 51 °C, the melting point of mPEG-PLA-OH. The endothermic peaks for the block copolymers between 36 °C and 51 °C were probably due to the melting of the mPEG region in the copolymers, indicating the formation of separated phases. The conjugation of the amorphous dendritic structure with semi-crystalline mPEG-PLA-OH indeed suppressed the crystallinity of the block copolymer. The reduced melting temperatures of the mPEG region in the block copolymers compared with pure mPEG indicated a lower degree of crystallinity in the copolymers. As the molecular weight of the terminal dendritic structure increased, the melting temperature of the copolymers decreased. It is possible that the dendritic structure interfered with the crystallization of the mPEG block, resulting in an imperfect crystal. Deprotection of the terminal acetonide groups resulted in a decrease of the melting point (Tm), while the molecular weight of the copolymers also changed . 3.2. Micelle formation The first aim of this study was to develop blood-stable and reduction-sensitive, reversibly disulfide crosslinked polymeric micelles for curcumin delivery, for which terminal dendronized mPEG-PLA and lipoic acid conjugates (mPEG-PLA-(LA)4) were designed and prepared. Curcumin-loaded non-crosslinked micelles (Cur/NCMs) were prepared by a solid dispersion-thin film hydration method which can be easily scaled up (Gong et al., ). In this process, amphiphilic mPEG-PLA-(LA)4 self-assembled into spherical micelles (NCMs) with an average diameter of ∼25.4 nm and a PDI of 0.239, as measured by DLS. mPEG-PLA-(LA)4 exhibited a low critical micelle concentration (CMC) of 12 mg/L, as determined by fluorescence measurements using pyrene as a probe (data not shown). Loading of curcumin into NCMs was performed at a curcumin concentration of 3 mg/mL and a theoretical drug loading content (DLC) of 5.0 wt%.
The loading efficiency was nearly 100% and the final particle size was ∼27.3 nm with a PDI of 0.237 prior to crosslinking. Cur/NCMs could be readily crosslinked using a catalytic amount (20 mol% relative to the lipoyl units) of DTT to initiate the ring-opening polymerization of the lipoyl rings to form a linear polydisulfide (Meng et al., ). After crosslinking, no significant change in curcumin loading capacity was observed, while the molecular weight of the polymer composing the micelles was significantly increased from 3982 g/mol to 107,000 g/mol with a polydispersity of 1.0–1.1 . DLS measurements revealed that the size of Cur/DCMs decreased to ∼24.6 nm, while the PDI remained low (0.225) after crosslinking. TEM images demonstrated that both Cur/NCMs and Cur/DCMs exhibited a spherical morphology and a size distribution close to that determined by DLS . 3.3. In vitro stability In order to investigate whether the intra-micellar disulfide crosslinking can enhance the stability of curcumin-loaded micelles against severe micelle-disrupting conditions, the average diameters and PDI of Cur/NCMs and Cur/DCMs upon dilution with ethanol and SDS were monitored by a laser particle size analyzer. It is well known that SDS is able to efficiently solubilize amphiphilic block copolymers at high concentrations, resulting in destabilization of the polymeric micelles (Li et al., ), and ethanol can also solubilize the mPEG-PLA-(LA)4 block copolymer. As shown in Fig. S1, Cur/DCMs exhibited superior colloidal stability against extensive dilution with 50% ethanol and 50% SDS. After each Cur/DCM dispersion (3.0 mg/mL) was mixed with the same volume of SDS aqueous solution (2.5 mg/mL) for 10 min, the particle size was recorded. The PDI of the Cur/DCMs showed a slight increase, while the mean diameter was almost unchanged. The constant particle size of the Cur/DCMs under this condition over time indicated that such crosslinked micelles remained intact.
In contrast, abundant small and large aggregates appeared in the Cur/NCMs dispersion, accompanied by a significantly broadened size distribution of the original micelle particles, indicating that NCMs were dissociated into unimers in SDS solution. The presence of ethanol resulted in large aggregates of NCMs, but barely influenced Cur/DCMs. Cur/NCMs and Cur/DCMs were dispersed in PBS buffer solution and maintained at room temperature. At day 28, precipitation was observed in Cur/NCMs, and the precipitation was more obvious at day 42, while no precipitation was observed in Cur/DCMs over the investigated time range . Following the evolution of the particle size of Cur/NCMs and Cur/DCMs, the dimension of Cur/NCMs remained constant for at least 14 days, after which the particle size increased markedly. The particle size of Cur/DCMs was stable for at least 6 weeks . These results indicated that the core-crosslinked structure conferred excellent colloidal stability. 3.4. In vitro release profile The drug release profiles of free curcumin (s-Cur), Cur/NCMs, and Cur/DCMs were measured using the dialysis method . Curcumin release from s-Cur and Cur/NCMs was rapid. About 60% and 32% of curcumin were released from s-Cur and Cur/NCMs, respectively, within the first 8 h. At 24 h, 75% and 65% of the curcumin in s-Cur and Cur/NCMs had been released, whereas only 22% of curcumin was released from Cur/DCMs, and the slow drug release was sustained for more than one week (data not shown), indicating that the disulfide crosslinked micelle (DCM) core significantly improved the stability of the micelles. Furthermore, as predicted, the release rate from Cur/DCMs was significantly accelerated when the GSH concentration was raised to the intracellular level (10 mM). This drug release behavior induced by 10 mM GSH can be exploited to achieve minimized premature drug release during circulation in vivo but triggered release upon internalization of the micelles into cancer cells. 3.5.
In vitro biocompatibility The in vitro hemolytic activity of NCMs and DCMs was evaluated with human red blood cells (RBCs). A universal method according to the literature for testing in vitro nanoparticle hemolysis was carried out (Dobrovoiskaia et al., ). With increasing concentrations of both NCMs and DCMs up to 1000 μg/mL, the membrane of the RBCs remained intact , and hardly any free hemoglobin was detected in the supernatant (Fig. S2), indicating good biocompatibility with blood. The in vitro cytotoxicity of NCMs, DCMs, s-Cur, Cur/NCMs, and Cur/DCMs against MC-38 cancer cells was evaluated by a CCK8 assay. As shown in Fig. S3, blank NCMs and DCMs did not show detectable cytotoxicity even at a high polymer concentration of 1 mg/mL against tumor cells (MC-38) and the normal human colon mucosal epithelial cell line (NCM460), while s-Cur, Cur/NCMs, and Cur/DCMs exhibited dose-dependent cytotoxicity to MC-38 cells with IC50 values of 13.7, 18.04, and 33.3 µg/mL, respectively . Cur/DCMs showed much lower cytotoxicity than the solvent-based formulation and the non-crosslinked formulation, most likely resulting from the much slower release of curcumin from the core-crosslinked micelles. To estimate the GSH sensitivity of the core-crosslinked micelles, the in vitro anticancer activity was also assessed in MC-38 cells with an elevated GSH level, which is well known to break down the disulfide crosslinkage. GSH itself cannot be effectively taken up by cells owing to its anionic nature. Reports confirmed that GSH-OEt, a neutralized form of GSH, can penetrate cellular membranes and rapidly produce a high concentration of GSH through ethyl ester hydrolysis in the cytoplasm (Koo et al., ). In the present study, cells were pretreated with 10 mM GSH-OEt before incubation with the different curcumin formulations to modulate the intracellular GSH concentration.
As presented in , the viability of cells treated with Cur/DCMs significantly decreased (p < 0.05) with 10 mM GSH-OEt pretreatment, while no significant difference in cell viability was observed for the Cur/NCMs-treated group before and after GSH-OEt pretreatment. The IC50 values of Cur/NCMs + GSH-OEt and Cur/DCMs + GSH-OEt were 14.7 and 21.8 μg/mL, respectively. Considering the negligible cytotoxicity of blank NCMs and DCMs, the above cell growth inhibition was ascribed to the accelerated curcumin release from the core-crosslinked micelles triggered by the increased intracellular GSH concentration, which induced de-crosslinking of the disulfide linkages in the micelle core. Curcumin is known to play a role in anti-proliferation via modulating MAPK signaling (Binion et al., ; Yallapu et al., ; Hsiao et al., ). We further investigated whether the encapsulated curcumin mediated anti-proliferation through down-regulation of p-MEK1/2 and p-ERK1/2 , and the corresponding quantification is shown in Fig. S4. All groups containing curcumin (s-Cur, Cur/NCMs, Cur/DCMs, Cur/NCMs + GSH, Cur/DCMs + GSH) showed significantly lower expression of the proliferative marker proteins. In particular, the triggered release of curcumin from DCMs exhibited the most pronounced inhibition of cell proliferation. 3.6. Pharmacokinetics Free curcumin in plasma is known to be subject to rapid elimination and metabolism by the liver (Garcea et al., ); therefore, in vivo stability is quite important in the carrier design for curcumin delivery. The present disulfide crosslinked formulation, Cur/DCMs, exhibited excellent stability that could protect the curcumin payload in the micelle core and therefore prolong the circulation time and improve the bioavailability after systemic administration. A comparative pharmacokinetic study among free s-Cur, Cur/NCMs, and Cur/DCMs (curcumin 20 mg/kg) after i.v. administration in SD rats was performed.
Plasma curcumin concentration–time curves are plotted in . The main pharmacokinetic parameters of the three formulations were calculated using non-compartmental analysis, as listed in . The curcumin levels in SD rats at a single dose of 20 mg/kg from both s-Cur and Cur/NCMs declined rapidly, falling below 100 ng/mL within 30 min and below the detection limit of 1 ng/mL after 12 h. Although curcumin can be encapsulated into the core of polymeric micelles to be completely dispersible in saline and intravenously injectable, most self-assembled micelles are reported to be dissociated by the blood components and lose their payload right after administration (Savic et al., ; Chen et al., ; Letchford & Burt, ). In the present study, no significant difference was observed in the pharmacokinetic behavior between s-Cur and Cur/NCMs, indicating that the bioavailability of curcumin was not improved by NCM encapsulation. A distinctly different result was observed in the group treated with Cur/DCMs. The enhanced stability of curcumin in the core-crosslinked micelles resulted in a significant increase in plasma curcumin concentration. Even after 48 h, the curcumin concentration in plasma from the group treated with Cur/DCMs was ∼30 ng/mL, comparable to the serum concentration achieved with a daily oral intake of 10 g (Cheng et al., ). For Cur/DCMs, the area under the time–concentration curve (AUC) was 7.55-fold larger, the elimination half-life (t1/2) was 8.48-fold longer, the mean residence time (MRT) was 94.22-fold longer, and the maximum plasma concentration was 1.49-fold higher than those of Cur/NCMs, while the total body clearance (CLz) was significantly decreased, indicating that the elimination of curcumin was effectively reduced by core-crosslinked micelle encapsulation. Taken together, these results demonstrated that Cur/DCMs markedly improved the stability of curcumin in the blood circulation, which clearly contributed to the improved bioavailability in vivo .
3.7. Antitumor efficacy In vivo antitumor efficacy and systemic toxicity were evaluated on an MC-38 colon cancer xenograft model in C57BL/6 mice to examine the synergistic efficacy of Cur/DCMs with immunotherapy. First, as shown in c), the tumor volume in mice treated with saline grew rapidly, approximately 2600 mm 3 on day 16. However, tumors in mice treated with Cur/DCMs were notably smaller than those in mice treated with saline. Cur/DCMs induced ∼40% of tumor growth retardant compared to the control saline group while anti-PD-1 greatly inhibited the tumor growth ( p < 0.05). Second, enhanced tumor growth inhibition efficacy was observed by simultaneous administration of Cur/DCMs and anti-PD-1, the average tumor volume of which was only 9.6% of that in the only anti-PD-1 treated group. The results demonstrated that strong synergistic efficacy in treating cancer could be obtained through immunotherapeutic agent co-delivery with Cur/DCMs. More importantly, as compared to the anti-PD-1 and Cur/DCMs co-delivery group with tumor recurrence of 0%, the tumor recurrence of the group treated with single anti-PD-1 was 50% (three out of six mice). However, free s-Cur was found not to be able to significantly enhance the antitumor effects of anti-PD-1 in our previous research (data not shown), which indicated that free s-Cur had no synergistic anti-tumor efficacy on immunotherapeutic agent in vivo because of the instability of s-Cur during circulation in the bloodstream. In the present study, blood stable and reduction sensitive Cur/DCMs significantly improved free curcumin concentration in cancer cells through the EPR effect and selective burst release mechanism, which effectively reversed the immunosuppressive tumor microenvironment and improved the immunotherapeutic efficacy of anti-PD-1. The synergistic index is a key reference to evaluate the synergistic effect. 
Synergy refers to two or more components mixing together, and the effect is more remarkable than the sum of the effects deriving from individual components applied alone. We have tentatively calculated the synergistic index of the anti-PD-1 antibody combined with Cur/DCMs according to the reported method (Finney, ; Huang et al., ), and the value is 1.18. As the selected dosage was limited in our setting, the more accurate synergistic index resulting from additional experimental data will be carried out in our future experiments. Additionally, all the treatments were well-tolerated at the tested dosage and no apparent side effects, including body weight loss , were observed in any group during the experiment.
Synthesis and characterization of mPEG-PLA-(LA) 4
The synthetic method was controlled precisely in every step, which led to a well-defined structure of the resulting dendritic lipoic acid functionalized block copolymer. The detailed synthetic route is illustrated in . The first step was to synthesize the hydroxyl-terminated mPEG-PLA-OH block copolymer via ring-opening polymerization of d , l -lactide initiated by mPEG2000 and catalyzed by stannous octoate (Sn(Oct) 2 ). After precipitation in cold ether, mPEG-PLA-OH was obtained as a white powder in up to ∼90% yield. The molecular weight of the synthesized mPEG-PLA-OH was calculated from the 1 H NMR spectrum . The M n of the mPEG block was 2000 g/mol and the molecular weight of the PLA block was 500 g/mol with a polydispersity index (PDI) of 1.04, as characterized by GPC analysis . The second step, coupling of the terminal hydroxyl groups in mPEG-PLA-OH with second-generation acetonide-terminated polyester dendrons based on 2,2-bis(hydroxymethyl) propionic acid, was achieved with a simple divergent growth approach (Gillies & Frechet, ). In the present study, 4-py was used as the condensing agent, which was found to be quite efficient and easy to remove by simple re-crystallization from ethanol. The complete end-capping of terminal hydroxyl groups by polyester dendrons was confirmed by the 1 H NMR spectrum and the GPC analysis . The average end-capping ratio of mPEG-PLA-OH with polyester dendrons was determined by comparing the integration ratio between signal a , assigned to the terminal methyl group of the mPEG unit at 3.47 ppm, and signal h , assigned to the methylene groups of the dendron unit, with a theoretical value of 3:8 for 100% end-capping. In all cases, ratios larger than 90% indicated complete conversion of the end structure. The third step was to synthesize the lipoic acid-terminated telodendrimer, shown as mPEG-PLA-(LA) 4 in .
Although the carboxyl groups in lipoic acid can also react with the hydroxyl end groups in mPEG-PLA-(OH) 4 in the presence of an acylating catalyst such as dicyclohexylcarbodiimide (DCC) (Gotsche et al., ), this direct coupling reaction was barely possible because of the low reactivity of the terminal hydroxyl groups in polymer chains. In the present study, an anhydride mixture of lipoic acid and pivaloyl chloride was used to convert the terminal hydroxyl groups of mPEG-PLA-(OH) 4 into the lipoic acid structure, which was found to be a powerful acylating reagent yielding complete conversion while not being detrimental to the polymer backbone, as evidenced in the 1 H NMR spectrum (Noda et al., ; Fan et al., ). After the coupling reaction, new peaks at 3.10–3.20, 2.44–2.49, 2.30–2.35, and 1.89–1.94 ppm, attributed to the lipoic acid, are shown in , indicating the formation of the expected macromolecule. The end-capping efficiency was also calculated to be about 100% by comparing the integration ratio between the signal of the terminal methyl group in mPEG at 3.47 ppm and the signals of i , k , and o in the lipoyl unit. As described above, the terminal hydroxyl groups in mPEG-PLA-(OH) 4 were completely capped by the mixed anhydride. The GPC profiles in also confirmed that the end-capping reaction of mPEG-PLA-(OH) 4 resulted in a slight increase of about 600 g/mol in the polymer molecular weight, while the PDI remained constant. Altogether, our results clearly confirm the complete end-capping of the terminal hydroxyl groups in mPEG-PLA-(OH) 4 with lipoic acid without changing the backbone of the block copolymer. DSC thermograms revealed endothermic peaks for mPEG-PLA-OH, mPEG-PLA-Ac, mPEG-PLA-(OH) 2 , mPEG-PLA-(Ac) 2 , mPEG-PLA-(OH) 4 , and mPEG-PLA-(LA) 4 . All scans were run up to 100 °C and no thermal changes above 60 °C were observed for any of the samples.
All the end-functionalized polymers exhibited a single endothermic peak below 51 °C, the melting point of mPEG-PLA-OH. The endothermic peaks for the block copolymers between 36 °C and 51 °C were probably due to the melting of the mPEG region in the copolymers, indicating the formation of separated phases. The conjugation of the amorphous dendritic structure with semi-crystalline mPEG-PLA-OH indeed suppressed the crystallinity of the block copolymer. The reduced melting temperatures of the mPEG region in the block copolymers compared with pure mPEG indicated a lower degree of crystallinity in the copolymers. As the molecular weight of the terminal dendritic structure increased, the melting temperature of the copolymers decreased. It is possible that the dendritic structure interfered with the crystallization of the mPEG block, resulting in an imperfect crystal. Deprotection of the terminal acetonide groups resulted in a decrease of the melting point ( T m ) while the molecular weight of the copolymers also changed .
Micelle formation
The first aim of this study was to develop blood-stable and reduction-sensitive reversibly disulfide crosslinked polymeric micelles for curcumin delivery, for which terminal dendronized mPEG-PLA and lipoic acid conjugates (mPEG-PLA-(LA) 4 ) were designed and prepared. Curcumin-loaded non-crosslinked micelles (Cur/NCMs) were prepared by a solid dispersion-thin film hydration method that can be easily scaled up (Gong et al., ). In this process, amphiphilic mPEG-PLA-(LA) 4 self-assembled into spherical micelles (NCMs) with an average diameter of ∼25.4 nm and a PDI of 0.239, as measured by DLS. mPEG-PLA-(LA) 4 exhibited a low critical micelle concentration (CMC) of 12 mg/L, as determined by fluorescence measurements using pyrene as a probe (data not shown). Loading of curcumin into NCMs was performed at a curcumin concentration of 3 mg/mL and a theoretical drug loading content (DLC) of 5.0 wt%. The loading efficiency was nearly 100%, and the final particle size was ∼27.3 nm with a PDI of 0.237 prior to crosslinking. Cur/NCMs could be readily crosslinked using a catalytic amount (20 mol% relative to the lipoyl units) of DTT to initiate the ring-opening polymerization of lipoyl rings to form a linear polydisulfide (Meng et al., ). After crosslinking, no significant change in curcumin loading capacity was observed, while the molecular weight of the polymer composing the micelles increased significantly from 3982 g/mol to 107,000 g/mol with a polydispersity of 1.0–1.1 . DLS measurements revealed that the size of Cur/DCMs decreased to ∼24.6 nm while the PDI remained low at 0.225 after crosslinking. TEM images demonstrated that both Cur/NCMs and Cur/DCMs exhibited a spherical morphology and a size distribution close to that determined by DLS .
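The loading metrics above (a theoretical DLC of 5.0 wt% at nearly complete encapsulation) follow the standard definitions. The sketch below illustrates the arithmetic with hypothetical feed masses, not the study's raw data:

```python
def drug_loading_content(drug_mg, carrier_mg):
    """DLC (wt%) = encapsulated drug mass / total micelle mass x 100."""
    return 100.0 * drug_mg / (drug_mg + carrier_mg)

def encapsulation_efficiency(encapsulated_mg, fed_mg):
    """EE (%) = encapsulated drug mass / drug mass initially fed x 100."""
    return 100.0 * encapsulated_mg / fed_mg

# Hypothetical feed consistent with a 5.0 wt% theoretical DLC:
# 3 mg curcumin per 57 mg of mPEG-PLA-(LA)4 polymer.
print(drug_loading_content(3.0, 57.0))      # 5.0
print(encapsulation_efficiency(2.97, 3.0))  # 99.0
```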
In vitro stability
In order to investigate whether intra-micellar disulfide crosslinking can enhance the stability of curcumin-loaded micelles against severe micelle-disrupting conditions, the average diameters and PDI of Cur/NCMs and Cur/DCMs upon ethanol dilution and SDS challenge were monitored with a laser particle size analyzer. It is well known that SDS can efficiently solubilize amphiphilic block copolymers at high concentrations, resulting in destabilization of polymeric micelles (Li et al., ), and ethanol can also solubilize the mPEG-PLA-(LA) 4 block copolymer. As shown in Fig. S1, Cur/DCMs exhibited superior colloidal stability against extensive dilution with 50% ethanol and 50% SDS. After each Cur/DCM dispersion (3.0 mg/mL) was mixed with the same volume of SDS aqueous solution (2.5 mg/mL) for 10 min, the particle size was recorded. The PDI of the Cur/DCMs showed a slight increase, while the mean diameter was almost unchanged. The constant particle size of the Cur/DCMs under this condition over time indicated that the crosslinked micelles remained intact. In contrast, abundant small and large aggregates appeared in the Cur/NCMs dispersion, accompanied by a significantly broadened size distribution of the original micelle particles, indicating that NCMs were dissociated into unimers in SDS solution. The presence of ethanol resulted in large aggregates of NCMs but barely influenced Cur/DCMs. Cur/NCMs and Cur/DCMs were dispersed in PBS buffer solution and maintained at room temperature. At day 28, precipitation was observed in Cur/NCMs, and the precipitation was more obvious at day 42, while no precipitation was observed in Cur/DCMs over the investigated time range . Following the evolution of the particle size of Cur/NCMs and Cur/DCMs, the dimension of Cur/NCMs remained constant for at least 14 days, after which the particle size increased remarkably. The particle size of Cur/DCMs was stable for at least 6 weeks .
These results indicated that the core-crosslinked structure conferred excellent colloidal stability.
In vitro release profile
The drug release profiles of free curcumin (s-Cur), Cur/NCMs, and Cur/DCMs were measured using the dialysis method . Curcumin release from s-Cur and Cur/NCMs was rapid. About 60% and 32% of curcumin was released from s-Cur and Cur/NCMs, respectively, within the first 8 h. At 24 h, 75% and 65% of curcumin in s-Cur and Cur/NCMs was released, whereas only 22% of curcumin was released from Cur/DCMs, and the slow drug release was sustained for more than one week (data not shown), indicating that the disulfide crosslinked micelle (DCM) core significantly improved the stability of the micelles. Furthermore, as predicted, the release rate of Cur/DCMs was significantly accelerated when the GSH concentration was raised to the intracellular level (10 mM). This drug release behavior induced by 10 mM GSH can be exploited to achieve minimal premature drug release during circulation in vivo but triggered release upon internalization of the micelles into cancer cells.
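Cumulative release percentages like those above are, in a typical dialysis setup, computed from sampled concentrations with a correction for the medium withdrawn and replaced at each time point. A minimal sketch of that bookkeeping, using hypothetical concentrations and volumes rather than the study's data:

```python
def cumulative_release_percent(concs_ug_per_ml, sample_ml, total_ml, dose_ug):
    """Cumulative % drug released at each sampling time for a dialysis study
    in which sample_ml of release medium is withdrawn (and replaced with
    fresh medium) at every time point."""
    percents = []
    removed_ug = 0.0  # drug mass carried away by earlier withdrawals
    for c in concs_ug_per_ml:
        in_medium_ug = c * total_ml
        percents.append(100.0 * (in_medium_ug + removed_ug) / dose_ug)
        removed_ug += c * sample_ml
    return percents

# Hypothetical run: 1000 µg dose, 100 mL medium, 1 mL samples.
print(cumulative_release_percent([1.0, 2.0, 3.0], 1.0, 100.0, 1000.0))
# -> [10.0, 20.1, 30.3]
```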
In vitro biocompatibility
In vitro hemolytic activity of NCMs and DCMs was evaluated with human red blood cells (RBCs). A universal method for testing in vitro nanoparticle hemolysis was carried out according to the literature (Dobrovoiskaia et al., ). With increasing concentrations of both NCMs and DCMs up to 1000 μg/mL, the membrane of RBCs remained intact , and hardly any free hemoglobin was detected in the supernatant (Fig. S2), indicating good biocompatibility with blood. The in vitro cytotoxicity of NCMs, DCMs, s-Cur, Cur/NCMs, and Cur/DCMs against MC-38 cancer cells was evaluated by a CCK8 assay. As shown in Fig. S3, blank NCMs and DCMs did not show detectable cytotoxicity even at a high polymer concentration of 1 mg/mL against tumor cells (MC-38) and the normal human colon mucosal epithelial cell line (NCM460), while s-Cur, Cur/NCMs, and Cur/DCMs exhibited dose-dependent cytotoxicity to MC-38 cells with IC 50 values of 13.7, 18.04, and 33.3 µg/mL, respectively . Cur/DCMs showed much lower cytotoxicity than the solvent-based formulation and the non-crosslinked formulation, most likely resulting from the much slower release of curcumin from the core crosslinked micelles. To estimate the GSH sensitivity of the core crosslinked micelles, the in vitro anticancer activity was also evaluated in MC-38 cells with an elevated GSH level, which is well known to break down the disulfide crosslinkage. GSH itself cannot be effectively taken up by cells owing to its anionic nature. Reports confirmed that GSH-OEt, a neutralized form of GSH, can penetrate cellular membranes and rapidly produce a high intracellular GSH concentration through ethyl ester hydrolysis in the cytoplasm (Koo et al., ). In the present study, cells were pretreated with 10 mM GSH-OEt before incubation with the different curcumin formulations to modulate the intracellular GSH concentration.
As presented in , the viability of cells treated with Cur/DCMs significantly decreased ( p < 0.05) with 10 mM GSH-OEt pretreatment, while no significant difference in cell viability was observed for the Cur/NCMs-treated group before and after GSH-OEt pretreatment. The IC 50 values of Cur/NCMs + GSH-OEt and Cur/DCMs + GSH-OEt were 14.7 and 21.8 μg/mL, respectively. Considering the negligible cytotoxicity of blank NCMs and DCMs, the above cell growth inhibition was ascribed to the accelerated curcumin release from the core crosslinked micelles at the increased intracellular GSH concentration, which triggered de-crosslinking of the disulfide linkages in the micelle core. Curcumin is known to play a role in anti-proliferation via modulating MAPK signaling (Binion et al., ; Yallapu et al., ; Hsiao et al., ). We further investigated whether the encapsulated curcumin mediated this anti-proliferative effect through down-regulation of p-MEK1/2 and p-ERK1/2 , and the corresponding quantification is shown in Fig. S4. All curcumin-containing groups (s-Cur, Cur/NCMs, Cur/DCMs, Cur/NCMs + GSH, Cur/DCMs + GSH) showed significantly lower expression of the proliferative marker proteins. In particular, the triggered release of curcumin from DCMs exhibited the most intense inhibition of cell proliferation.
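Two of the readouts above reduce to simple arithmetic: percent hemolysis is conventionally normalized between negative (saline) and positive (water or Triton) controls, and an IC 50 can be approximated by log-linear interpolation between the two doses bracketing 50% viability (a full curve fit, e.g. a four-parameter logistic, is normally preferred). All numbers below are illustrative, not the study's measurements:

```python
import math

def hemolysis_percent(abs_sample, abs_negative, abs_positive):
    """Percent hemolysis from supernatant absorbance (e.g. at 540 nm),
    normalized between the 0 % (saline) and 100 % (lysis) controls."""
    return 100.0 * (abs_sample - abs_negative) / (abs_positive - abs_negative)

def ic50_log_interp(doses, viability_pct):
    """IC50 by log-linear interpolation between the two doses that bracket
    50 % viability. doses must be ascending; viability_pct in percent."""
    pairs = list(zip(doses, viability_pct))
    for (d1, v1), (d2, v2) in zip(pairs, pairs[1:]):
        if v1 >= 50.0 >= v2:
            frac = (v1 - 50.0) / (v1 - v2)
            return 10 ** (math.log10(d1) + frac * (math.log10(d2) - math.log10(d1)))
    raise ValueError("50 % viability not bracketed by the tested doses")

print(round(hemolysis_percent(0.08, 0.05, 1.05), 1))          # 3.0
print(round(ic50_log_interp([1, 10, 100], [90, 60, 20]), 1))  # 17.8
```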
Pharmacokinetics
Free curcumin in plasma is known to be subject to rapid elimination and metabolism by the liver (Garcea et al., ); therefore, in vivo stability is quite important in carrier design for curcumin delivery. The present disulfide crosslinked formulation, Cur/DCMs, exhibited excellent stability that could protect the curcumin payload in the micelle core and therefore prolong the circulation time and improve the bioavailability after systemic administration. A comparative pharmacokinetic study among free s-Cur, Cur/NCMs, and Cur/DCMs (curcumin 20 mg/kg) after i.v. administration in SD rats was performed. Plasma curcumin concentration–time curves are plotted in . The main pharmacokinetic parameters of the three formulations were calculated using non-compartmental analysis, as listed in . The curcumin levels in SD rats at a single dose of 20 mg/kg from both s-Cur and Cur/NCMs declined rapidly, falling below 100 ng/mL within 30 min and below the detection limit of 1 ng/mL after 12 h. Although curcumin can be encapsulated into the core of polymeric micelles to be completely dispersible in saline and intravenously injectable, most self-assembled micelles are reported to be dissociated by blood components and lose their payload right after administration (Savic et al., ; Chen et al., ; Letchford & Burt, ). In the present study, no significant difference was observed in the pharmacokinetic behavior between s-Cur and Cur/NCMs, indicating that the bioavailability of curcumin was not improved by NCM encapsulation. A distinct result was observed in the group treated with Cur/DCMs. The enhanced stability of curcumin conferred by the core crosslinked micelles resulted in a significant increase in plasma curcumin concentration. Even after 48 h, the curcumin concentration in plasma from the group treated with Cur/DCMs was ∼30 ng/mL, which was equal to the serum concentration upon receiving a daily oral intake of 10 g (Cheng et al., ).
For Cur/DCMs, the area under the time–concentration curve (AUC) was 7.55-fold larger, the half-life of elimination ( t 1/2 ) was 8.48-fold longer, the mean residence time (MRT) was 94.22-fold longer, and the maximum concentration in plasma was 1.49-fold higher than that of Cur/NCMs, while the total body clearance (CLz) was significantly decreased, indicating that elimination of curcumin was effectively decreased by core crosslinked micelle encapsulation. Taken together, these results demonstrated that Cur/DCMs markedly improved the stability of curcumin in the blood circulation, which obviously contributed to the improved bioavailability in vivo .
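The reported AUC, t 1/2 , MRT, and CLz come from non-compartmental analysis. A minimal sketch of the underlying arithmetic is below: linear trapezoidal AUC/AUMC and a terminal log-linear slope for lambda_z. Real NCA software additionally extrapolates to infinity and tracks units; the profile here is synthetic, not the rat data:

```python
import math

def nca(times_h, conc, dose, n_terminal=3):
    """Minimal non-compartmental analysis.
    AUC and AUMC by the linear trapezoidal rule; lambda_z from a log-linear
    fit of the last n_terminal points; then t1/2 = ln2/lambda_z,
    MRT = AUMC/AUC, CL = dose/AUC (units follow the inputs)."""
    auc = aumc = 0.0
    for (t1, c1), (t2, c2) in zip(zip(times_h, conc), zip(times_h[1:], conc[1:])):
        auc += 0.5 * (c1 + c2) * (t2 - t1)
        aumc += 0.5 * (t1 * c1 + t2 * c2) * (t2 - t1)
    # log-linear regression on the terminal points
    ts = times_h[-n_terminal:]
    ys = [math.log(c) for c in conc[-n_terminal:]]
    tbar = sum(ts) / len(ts)
    ybar = sum(ys) / len(ys)
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) / \
            sum((t - tbar) ** 2 for t in ts)
    lambda_z = -slope
    return {"AUC": auc, "t_half": math.log(2) / lambda_z,
            "MRT": aumc / auc, "CL": dose / auc}

# Synthetic mono-exponential profile C(t) = 1000*e^(-0.2 t):
times = [0.5, 1, 2, 4, 8, 12]
res = nca(times, [1000 * math.exp(-0.2 * t) for t in times], dose=20)
print(round(res["t_half"], 2))  # 3.47 (= ln2 / 0.2)
```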
Antitumor efficacy
In vivo antitumor efficacy and systemic toxicity were evaluated in an MC-38 colon cancer xenograft model in C57BL/6 mice to examine the synergistic efficacy of Cur/DCMs with immunotherapy. First, as shown in c), the tumor volume in mice treated with saline grew rapidly, reaching approximately 2600 mm 3 on day 16. However, tumors in mice treated with Cur/DCMs were notably smaller than those in mice treated with saline. Cur/DCMs induced ∼40% tumor growth retardation compared to the saline control group, while anti-PD-1 greatly inhibited tumor growth ( p < 0.05). Second, enhanced tumor growth inhibition was observed with simultaneous administration of Cur/DCMs and anti-PD-1, the average tumor volume of which was only 9.6% of that in the anti-PD-1-only treated group. The results demonstrated that strong synergistic efficacy in treating cancer could be obtained through immunotherapeutic agent co-delivery with Cur/DCMs. More importantly, compared with the anti-PD-1 and Cur/DCMs co-delivery group, which showed 0% tumor recurrence, the recurrence in the group treated with anti-PD-1 alone was 50% (three out of six mice). However, free s-Cur did not significantly enhance the antitumor effects of anti-PD-1 in our previous research (data not shown), which indicated that free s-Cur had no synergistic antitumor efficacy with the immunotherapeutic agent in vivo because of the instability of s-Cur during circulation in the bloodstream. In the present study, blood-stable and reduction-sensitive Cur/DCMs significantly improved the free curcumin concentration in cancer cells through the EPR effect and a selective burst release mechanism, which effectively reversed the immunosuppressive tumor microenvironment and improved the immunotherapeutic efficacy of anti-PD-1. The synergistic index is a key reference for evaluating the synergistic effect.
Synergy refers to a combined effect of two or more components that is more remarkable than the sum of the effects of the individual components applied alone. We have tentatively calculated the synergistic index of the anti-PD-1 antibody combined with Cur/DCMs according to the reported method (Finney, ; Huang et al., ), and the value is 1.18. As the selected dosages were limited in our setting, a more accurate synergistic index based on additional experimental data will be determined in our future experiments. Additionally, all the treatments were well tolerated at the tested dosages, and no apparent side effects, including body weight loss , were observed in any group during the experiment.
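The synergistic index of 1.18 quoted above was calculated with a published method (Finney; Huang et al.). One common formulation compares the observed combined inhibition with the inhibition expected under independent action; the sketch below uses that form with illustrative inhibition fractions (not the study's measurements), chosen only so the example lands near an index of 1.18:

```python
def synergy_index(effect_a, effect_b, effect_combined):
    """q = E_ab / (E_a + E_b - E_a*E_b).
    Effects are fractional inhibitions in [0, 1]; the denominator is the
    combined effect expected if the two agents act independently.
    q > 1 suggests synergy, q close to 1 additivity, q < 1 antagonism."""
    expected = effect_a + effect_b - effect_a * effect_b
    return effect_combined / expected

# Illustrative fractions: 0.40 (Cur/DCMs alone), 0.70 (anti-PD-1 alone),
# 0.97 (combination).
print(round(synergy_index(0.40, 0.70, 0.97), 2))  # 1.18
```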
Conclusions
We have designed and synthesized a telodendrimer (mPEG-PLA-(LA) 4 ) capable of forming reversibly disulfide crosslinked micelles (DCMs) for in vivo curcumin delivery, and the in vitro cytotoxicity, pharmacokinetics, and antitumor efficacy with anti-PD-1 against MC-38 colon cancer have been investigated. The DCMs stably retained curcumin in the bloodstream and efficiently improved its systemic bioavailability, with a 7.55-fold larger AUC, an 8.48-fold longer elimination half-life ( t 1/2 ), a 1.49-fold higher maximum plasma concentration, and a 94.22-fold longer MRT compared with the NCMs. Results in the antitumor setting further confirmed the synergistic anticancer efficacy of Cur/DCMs in combination with anti-PD-1 in treating MC-38 colon cancer. Therefore, our micellar formulation is expected to provide a feasible and efficacious way to deliver curcumin to reinforce immunotherapy in treating cancers.
Supplemental Material
Exploring the expression of DLL3 in gastroenteropancreatic neuroendocrine neoplasms and its potential diagnostic value | 10c4e767-ced7-4521-92cc-b904db3ec933 | 11770191 | Anatomy[mh] | Gastroenteropancreatic neuroendocrine neoplasms (GEP-NENs) are highly heterogeneous tumors that constituting approximately 65% of all neuroendocrine neoplasms and rank as the second most common gastrointestinal cancer , . Improved imaging techniques and enhanced awareness have contributed to an increased incidence from 1.1/100,000 in 1973 to 6.9/100,000 in 2012 . According to the 2019 WHO grading system, GEP-NENs are categorized as neuroendocrine tumors (NETs), neuroendocrine carcinomas (NECs), or mixed neuroendocrine-non-neuroendocrine neoplasms (MiNENs) based on histological differentiation, mitotic count, and the Ki-67 proliferation index. Well-differentiated NETs are further stratified into Grade1 (G1), Grade2 (G2), and Grade3 (G3) subtypes, whereas NECs with poorly differentiated NECs are classified as small cell NEC (SCNEC) and large cell NEC (LCNEC) . NECs are often non-functional, highly aggressive, and frequently diagnosed late with distant metastasis, resulting in dismal 5-year survival rates below 5% . Owing to the advanced stage and poor prognosis of GEP-NECs, treatments primarily aim to extend survival and enhance quality of life. Within the neuroendocrine spectrum, GEP-NECs share similar molecular and transcriptional profiles with small cell lung carcinoma (SCLC). Therefore, GEP-NECs treatment was similar to that for SCLC. Systemic chemotherapy, typically platinum-based regimens such as etoposide plus cisplatin (EC) and irinotecan plus cisplatin (IC), is the first-line of treatment for metastatic GEP-NECs. Notably, a Ki-67 index > 55% reliably predicts the reactivity of platinum-based chemotherapy . Although some studies have suggested efficacy for agents such as 5-fluorouracil, irinotecan, and oxaliplatin, there are still no standardized second-line regimens. 
Ongoing trials, such as the one using 5-fluorouracil + leucovorin + irinotecan offer promise (NCT03387592) – . Advances in molecular detection technology and basic experiments have led researchers to investigate the molecular pathways and interactions in neuroendocrine neoplasms, offering potential targets for GEP-NECs. For instance, sunitinib, a multitarget tyrosine kinase inhibitor primarily inhibiting vascular endothelial growth factor receptor and platelet-derived growth factor receptor, elicited positive responses in 11 of 20 patients with GEP-NECs in a phase II trial encompassing 6 NET G3 and 20 NEC cases . Phase II and III clinical trials testing single- or multi-drug combinations targeting various pathways, including XPO1 (NCT02250885), PARP (NCT04209595), and HDAC (NCT05076786), are currently ongoing for GEP-NECs. Among them, the negative regulator of the Notch pathway, DLL3, has drawn significant attention as a potential therapeutic target for neuroendocrine neoplasms , . The Notch pathway, known for its highly conserved signaling cascade, is a critical factor in cellular transformation, including cell proliferation, epithelial-mesenchymal transition, neuroendocrine cell differentiation, chemoresistance, and immune microenvironment modulation . As an inhibitory ligand of the Notch pathway , DLL3 interacts with various Notch receptors (Notch1-4) to facilitate malignant transformation. DLL3 expression is absent or minimal in normal cells and is predominantly localized within the Golgi apparatus and cytoplasmic vesicles. Conversely, in malignant NENs, DLL3 translocates to the cell membrane surface, inhibiting the CIS notch pathway , . ASCL1 , an intrinsic transcription factor in normal cells, regulates DLL3 expression, guiding neuroendocrine cell differentiation and initiating SCLC . ASCL1 activation increases DLL3 expression, enhancing the inhibition of Notch1 signaling . 
Both the inhibition of Notch1 and the upregulation of ASCL1 contribute to NEN development , . The efficacy of DLL3-targeted therapy has been confirmed in SCLC. Rova-T, a classic antibody-drug conjugate (ADC) in a phase I clinical trial (NCT01901653), has shown efficacy against recurrent or refractory SCLC . However, severe adverse reactions have halted further clinical investigations of Rova-T. Next-generation ADCs aim to enhance tumor cell uptake by refining drug linkers to mitigate toxicity and optimize pharmacokinetics for improved clinical utility , . Tarlatamab, a bispecific T cell engager (TCE) targeting DLL3 on tumor cells and CD3 on T cells, has demonstrated promising results in a phase I trial, with a 13% objective response rate (ORR) and 71% of patients with SCLC experiencing relief for ≥ 6 months . Other candidate drugs, such as HPN328 and AMG 119, have also shown beneficial antitumor responses in clinical and preclinical stages , . Patients with DLL3-expressing SCLC or NEC are currently enrolled in the first human Phase I trial of BI 764,532 (TCE) (NCT04429087). Although DLL3-targeted therapies are designed for tumors expressing DLL3, the immunohistochemical positivity for DLL3 expression on tumor cells was not a required criterion for patient enrollment in the trials. Therefore, DLL3 immunohistochemistry does not have a predictive role in determining eligibility for the trials. Although DLL3 expression was initially identified in SCLC, in vitro investigations have revealed its diverse oncogenic role; elevated expression promotes aggressive behavior through Snail overexpression , . Moreover, DLL3 is highly expressed not only in SCLC but also in lung LCNEC. In a retrospective study of pulmonary LCNEC, over 74% (70/94) of patients expressed DLL3 . 
DLL3 expression extends beyond lung cancer to various invasive malignancies, including prostate cancer, bladder SCNEC, malignant melanoma, glioblastoma, and medullary thyroid carcinoma – , suggesting its potential as a biomarker for neuroendocrine-origin malignancies. However, DLL3 expression in GEP-NECs is poorly understood. This study aimed to investigate DLL3 expression in GEP-NECs and to analyze its clinicopathological correlations and the relationship between DLL3 expression and patient prognosis.
Patient selection
All available information in this retrospective study was sourced from the Peking University Cancer Hospital. Basic patient information spanning 2010 to 2023, including age, sex, primary tumor site, histological classification, TNM stage, chemotherapy prior to baseline, and some aspects of prognosis, was primarily collected from medical records. Telephone follow-up was specifically used to obtain detailed prognostic information, which may not have been fully documented in the medical records or required updates beyond what was originally recorded. TNM staging referred to the American Joint Committee on Cancer (AJCC) 8th edition . After re-examination by two experienced pathologists, it was ensured that each case met the WHO definition of neuroendocrine tumors: the architecture was organoid, trabecular, and palisade-like; the mitotic count and Ki67 index strictly adhered to the standards of the different grades; and each case expressed at least one neuroendocrine marker (Syn, CgA, CD56). Cases that did not meet these conditions were excluded from the study. Cases that lacked sufficient tissue samples for further staining were also excluded. The study protocol received approval from the Medical Ethics Committee of Peking University Cancer Hospital (approval number 2023KT29), and all patients provided informed consent before tissue sample utilization.
Pathological material
The sample composition of this experiment is summarized in Supplementary Fig. 1.
We obtained 248 tumor tissue samples from primary GEP-NECs (including surgical and biopsy samples), 19 samples from distant GEP-NEC metastases (all liver metastases), and 9 lymph node metastases. Additionally, 36 GEP-NET samples (8 G1, 9 G2, 19 G3) and 29 GAC samples were collected. Each sample underwent formalin fixation and paraffin embedding (FFPE). Some cases already had immunohistochemical staining results for the three neuroendocrine markers, Ki67, and PD-L1 when included in the study. For cases lacking any of these markers, additional immunohistochemical staining was carried out on available tumor tissues. The assessment of neuroendocrine marker results followed the 5th edition WHO criteria , the Ki67 index was evaluated as a percentage, and the expression of PD-L1 was evaluated by the combined positive score (CPS; 22C3, Agilent DaKo, Denmark, 1:50), calculated as the number of immune-related cells (tumor cells, lymphocytes, macrophages) expressing PD-L1 divided by the number of all tumor cells, multiplied by 100, with CPS ≥ 1 defined as positive.
Immunohistochemical staining
FFPE specimens were sectioned into 4-µm-thick slices using a rotary microtome, followed by incubation in EDTA solution (pH = 8.4) at 95 °C for 36 min. A DLL3 antibody (clone SP347, Roche, ready-to-use) was used for staining on the Ventana platform, whereas an ASCL1 antibody (24B72D11.1, BD Biosciences, 1:100) was stained on the Leica Bond III. The color reaction was achieved using diaminobenzidine (ZSGB-BIO, Beijing, China). Staining outcomes were evaluated under an optical microscope at 20× or 40× magnification. Each slide was reviewed by two experienced pathologists. A result was adopted only when both pathologists agreed; in cases of disagreement, the specimens were re-evaluated and discussed to achieve a consensus. DLL3 and ASCL1 results were binarily categorized. For DLL3, positive staining was defined as a reaction in ≥ 1% of tumor cells, regardless of intensity, based on criteria established by other studies – .
Any punctate, cytoplasmic, and/or membranous staining was considered positive, as previously described , . DLL3-high was characterized at two cutoffs: ≥ 50% positive tumor cells and ≥ 75% positive tumor cells . For ASCL1, tumor cell staining was scored as 0, 1, 2, or 3 (negative, faint, moderate, or strong, respectively). We adopted a semi-quantitative H-score, calculated by multiplying the percentage of positive tumor cells by the corresponding staining intensity and summing across intensities. An H-score ≥ 50 was defined as positive .
Statistical analysis
We conducted all analyses using SPSS software (IBM Corp., Armonk, NY, USA, version 25). The correlation between DLL3 expression and clinicopathological features was evaluated using the chi-squared test or Fisher’s exact test. Median progression-free survival (PFS) and overall survival (OS) were determined using Kaplan–Meier analysis, and survival differences were assessed using log-rank tests based on DLL3 expression status. The statistical significance level was set at p < 0.05.
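The immunohistochemical cutoffs described above (DLL3 positivity at ≥ 1%, high-expression thresholds at ≥ 50% and ≥ 75%, and the ASCL1 H-score with its ≥ 50 threshold) reduce to simple arithmetic; a sketch with a hypothetical case:

```python
def dll3_status(pct_positive_cells):
    """DLL3 categories used in this study: any staining in >= 1 % of tumor
    cells is positive; >= 50 % and >= 75 % are the 'DLL3-high' cutoffs."""
    return {"positive": pct_positive_cells >= 1.0,
            "high_50": pct_positive_cells >= 50.0,
            "high_75": pct_positive_cells >= 75.0}

def h_score(pct_by_intensity):
    """H-score = sum over intensities (0-3) of (% cells x intensity);
    range 0-300. An H-score >= 50 is called ASCL1-positive here."""
    return sum(intensity * pct for intensity, pct in pct_by_intensity.items())

# Hypothetical tumor: 20 % faint (1+), 10 % moderate (2+), 5 % strong (3+).
print(h_score({1: 20, 2: 10, 3: 5}))  # 55 -> ASCL1-positive
print(dll3_status(60.0))              # positive and high_50, but not high_75
```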
TNM staging referred to the American Joint Committee on Cancer (AJCC) 8th edition . After re-examination by two experienced pathologists, each case was confirmed to meet the WHO definition of neuroendocrine neoplasms: an organoid, trabecular, or palisading architecture; a mitotic count and Ki67 index strictly adhering to the criteria for each grade; and expression of at least one neuroendocrine marker (Syn, CgA, CD56). Cases that did not meet these conditions were excluded from the study, as were cases lacking sufficient tissue samples for further staining. The study protocol received approval from the Medical Ethics Committee of Peking University Cancer Hospital (approval number 2023KT29), and all patients provided informed consent before tissue sample utilization. The sample composition of this experiment is summarized in Supplementary Fig. 1.
Patient profile
Table provides a summary of the baseline characteristics of patients diagnosed with GEP-NECs. Among the 248 primary GEP-NEC cases, the age range was 23 to 85 years, with a median of 62 years. The study included 182 (73.4%) males and 66 (26.6%) females. The numbers of SCNEC, LCNEC, and MiNEN cases were 138 (55.7%), 101 (40.7%), and 9 (3.6%), respectively. Gastric NECs accounted for the largest proportion (148, 59.7%) in this cohort, followed by the esophagus (51, 20.6%) and pancreas (24, 9.7%). T1-stage patients accounted for 3.2% (8/248), while T2-T4-stage patients accounted for 43.1% (107/248). Patients with stage N0 and N1-N3 disease accounted for 11.7% (29/248) and 33.1% (82/248), respectively; 40.7% (101/248) of patients were M0 stage, and 5.2% (13/248) were M1 stage. Additionally, 64 (25.8%) patients received chemotherapy before baseline. The proportions of cases with Ki67 proliferation indices of 25–50%, 51–75%, and 76–100% were 11.7% (29/248), 39.5% (98/248), and 35.1% (87/248), respectively. In 75.4% (187/248) of cases, at least 2 NE markers were positive, while in 12.5% (31/248) of cases < 2 NE markers were positive. A positive reaction for PD-L1 was seen in 6.0% (15/248) of cases.
Relationship between DLL3 expression and clinicopathological features
DLL3 staining showed membranous, cytoplasmic, and punctate patterns: cytoplasmic and membranous staining was diffuse, whereas perinuclear staining was punctate and intermittent. However, we identified four cases in which nuclear positivity was observed alongside cytoplasmic and membranous staining, a pattern not previously reported in other studies. The nuclear positivity in these cases may represent a nonspecific or incidental reaction. Representative images are presented in Fig. a. Moreover, DLL3 showed a heterogeneous expression pattern in GEP-NECs. Correlations between DLL3 expression and clinicopathological features are presented in Table .
Positive expression was observed in 68.1% of SCNECs (94/138) compared to 38.6% of LCNECs (39/101) and 33.3% (3/9) of MiNENs ( p < 0.001). Of note, in MiNENs, the neuroendocrine component exhibited positive DLL3 expression, while the adenocarcinoma component showed negative expression (Fig. b). The expression rate of DLL3 was highest in the esophagus (68.6%, 35/51), followed by the colorectum (53.8%, 7/13), stomach (52.7%, 78/148), pancreas (41.7%, 10/24), and small intestine (25.0%, 2/8). DLL3 expression differed significantly by T stage (T1 100.0% vs. T2-T4 53.3%, p = 0.009). Patients who underwent chemotherapy prior to baseline demonstrated a higher prevalence of DLL3 expression than those who did not (67.2% vs. 50.5%, p = 0.015). In the group with < 2 positive NE markers, the expression rate of DLL3 was 38.7%, significantly lower than that in the group with ≥ 2 positive NE markers (56.7%, p = 0.048). No associations were observed between DLL3 expression and sex, N stage, M stage, Ki67 index, or PD-L1 expression.
DLL3 expression in metastatic tumors
The expression of DLL3 in metastatic tumors is shown in Supplementary Fig. 2. The staining pattern of DLL3 in metastatic lesions was similar to that in primary tumors. In lymph node metastases and distant metastases, the DLL3 positivity rates were 44.4% and 52.6%, respectively. Chi-square analysis showed no significant difference in DLL3 expression among primary tumors, lymph node metastases, and distant metastases ( p = 0.818) (Fig. a).
Differential diagnostic value of DLL3
To explore the expression of DLL3 in well-differentiated GEP-NETs, we stained 36 cases of GEP-NETs. Among these, none of the eight cases of NET G1 and nine cases of NET G2 exhibited positive DLL3 staining, while 3 out of 19 (15.8%) cases of NET G3 were positive for DLL3 staining.
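The subtype comparison reported above (DLL3-positive in 94/138 SCNECs vs. 39/101 LCNECs) can be checked with a plain Pearson chi-squared statistic for a 2×2 table; the statistic lands around 20.6, well above the df = 1 critical value of 10.83, consistent with the reported p < 0.001 (a pure-Python sketch, without continuity correction):

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic (no continuity correction) for the
    2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# DLL3 positive/negative counts from the cohort: SCNEC 94/138, LCNEC 39/101
stat = chi2_2x2(94, 138 - 94, 39, 101 - 39)
# stat is about 20.6 > 10.83, the df = 1 critical value for p = 0.001
```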
DLL3 expression in these cases was also heterogeneous, and all were positive in the cytoplasm and membrane (Fig. c). Since the morphological overlap between NET G3 and NECs, or between poorly differentiated adenocarcinoma and NECs, often poses a diagnostic dilemma for pathologists, we further assessed the value of DLL3 in differential diagnosis by performing DLL3 staining on additional GACs with different degrees of differentiation. No positive DLL3 staining was detected in any of the 29 GAC cases, regardless of the degree of differentiation (Fig. c). DLL3 may serve as a useful differential diagnostic tool, with a sensitivity of 54.8% and a specificity of 84.2% when differentiating NECs from NET G3, and a sensitivity of 54.8% and a specificity of 100.0% when differentiating NECs from GACs, at a cutoff value of 1% positive tumor cells. However, with the threshold raised to 50% positive tumor cells, the sensitivity declined to 31.9%, while the specificity increased to 94.7% when distinguishing NECs from NET G3 and to 100.0% when distinguishing NECs from GACs, reflecting the stricter standard (Table ).
ASCL1 expression
Almost all positive reactions for ASCL1 were localized to the nucleus, with only one case showing an unusual punctate positivity around the nucleus (Fig. a). The expression status of ASCL1 and DLL3 in 111 samples is shown in Fig. b. In addition, in samples co-expressing DLL3 and ASCL1, tumor cells expressing ASCL1 also exhibited DLL3 expression, demonstrating spatial consistency (Fig. c). ASCL1 expression was likewise heterogeneous within tumors. Overall, ASCL1 expression was detected in 14.4% (16/111) of GEP-NEC cases. Interestingly, the DLL3 expression rate differed between the ASCL1-positive and ASCL1-negative groups ( p = 0.002) (Fig. b).
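The diagnostic figures quoted above follow directly from the underlying counts. Taking 136/248 DLL3-positive NECs (54.8%), 3/19 positive NET G3, and 0/29 positive GACs at the 1% cutoff (counts back-calculated from the reported percentages, so treat this as an illustration rather than the authors' raw data):

```python
def sens_spec(tp, n_disease, fp, n_control):
    """Sensitivity over the disease group, specificity over the controls."""
    return tp / n_disease, (n_control - fp) / n_control

# NEC vs. NET G3 at the 1% cutoff: 136/248 NECs positive, 3/19 NET G3 positive
sens_g3, spec_g3 = sens_spec(136, 248, 3, 19)    # ~0.548 and ~0.842

# NEC vs. GAC at the 1% cutoff: no GAC stained positive
sens_gac, spec_gac = sens_spec(136, 248, 0, 29)  # ~0.548 and 1.0
```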
In the ASCL1-positive group, the DLL3 positivity rate was 87.5% (14/16), whereas 47.4% (45/95) of the ASCL1-negative group showed positive DLL3 staining. Similarly, DLL3 expression was higher in the ASCL1-positive group among GEP-SCNECs (100.0% vs. 56.8%, p = 0.02) (Fig. c). However, the difference in DLL3 expression was not significant in GEP-LCNECs (33.3% vs. 37.8%, p = 0.687) (Fig. d).
Follow-up and prognosis
Prognostic information was obtained for 199 patients. The median follow-up time was 19.7 months (range: 0.8–140.4 months). Survival analysis was conducted for different cutoff values of DLL3 expression ( N = 199) (Fig. ). In the DLL3-positive group (at the 1% cutoff value), the median PFS and median OS were 12.3 months (95% CI: 7.9–16.6) and 24.4 months (95% CI: 20.2–28.6), respectively, while in the DLL3-negative group, the median PFS and median OS were 13.4 months (95% CI: 10.5–16.3) and 25.9 months (95% CI: 18.9–32.9), respectively. The log-rank test showed no significant difference in prognosis between the DLL3-negative and DLL3-positive groups (Fig. a). Moreover, no significant difference in survival was observed between the high and low DLL3 expression groups, regardless of whether the cutoff value defining high DLL3 expression was 50% or 75% (Fig. b and c).
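The Kaplan–Meier curves behind these survival comparisons are normally produced with a statistics package, but the estimator itself is simple; a minimal pure-Python sketch on toy data (illustrative only, not the study's dataset):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimator.
    times: follow-up times; events: 1 = event observed, 0 = censored.
    Returns [(t, S(t))] at each time point where at least one event occurred."""
    s, curve = 1.0, []
    remaining = sorted(zip(times, events))
    while remaining:
        t = remaining[0][0]
        deaths = sum(e for tt, e in remaining if tt == t)
        if deaths:
            s *= 1 - deaths / len(remaining)  # multiply by (1 - d_i / n_i)
            curve.append((t, s))
        # drop everyone whose follow-up ends at time t (events and censorings)
        remaining = [(tt, e) for tt, e in remaining if tt > t]
    return curve

# Four patients: events at months 3 and 8, a censoring at 10, an event at 14
curve = kaplan_meier([3, 8, 10, 14], [1, 1, 0, 1])  # S drops at t = 3, 8, 14
```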
High levels of DLL3 expression are found in carcinoid subtypes of pulmonary NETs that exhibit significant dendritic cell infiltration, suggesting its potential as a viable target for focused intervention. Previous studies have reported a 65% DLL3 expression rate in 37 patients with LCNEC . Analysis of clinical trial groups revealed DLL3 expression in over 70% of SCLC cases , . A high expression rate of DLL3 was also observed in patients with castration-resistant neuroendocrine prostate cancer . A previous study reported a DLL3 expression rate of 76.9% in 13 cases of gastrointestinal-pancreatic NECs . We assessed DLL3 positivity in a large cohort of GEP-NECs ( n = 248) for the first time and found a positivity rate of 54.8%. Our study revealed different DLL3 expression rates between SCNEC and LCNEC, with SCNEC exhibiting a higher positivity rate than LCNEC. Regarding primary site distribution, the esophagus exhibited a high DLL3 expression rate, largely attributable to the high proportion of SCNEC among esophageal NECs. Similarly, this cohort showed a high expression rate of DLL3 in T1-stage patients, partly because SCNEC accounted for the majority of T1-stage patients (7/8).
A single-cell transcriptome sequencing study of SCLC tumor cells demonstrated that chemotherapy resulted in decreased expression of therapeutic target genes, including DLL3 . However, our results showed that DLL3 had a higher expression rate in patients who received chemotherapy before baseline treatment, which prompts future research on whether chemotherapy affects the expression of the DLL3 gene. Several studies have adopted various definitions of DLL3-high expression. When defined as staining in ≥ 50% of tumor cells, the DLL3-high expression rates in SCLC range from 32% to 79.5% , , . With a cutoff value of 75%, the rate of DLL3-high expression was 70% in another study on SCLC . By contrast, the rate of DLL3-high expression in lung LCNEC is relatively low at 54% . In this study, we applied 50% and 75% cutoff values to define DLL3-high expression in GEP-NECs, exploring the DLL3-high expression rate for the first time. Although these cutoff values have been frequently used in small cell lung cancer (SCLC), their application to GEP-NECs has not been previously reported. Our results showed a DLL3-high expression rate of 31.8% with a cutoff value of 50% and of 25.4% with a cutoff value of 75%. Since GEP-NECs comprise SCNEC and LCNEC, we analyzed the high expression rates in the different subtypes (Supplementary Table 1). Irrespective of the cutoff value, the expression rate of DLL3 in SCNEC was always higher than that in LCNEC. Furthermore, the expression rate of DLL3 in the digestive system was lower than that in high-grade neuroendocrine tumors of the lung, regardless of the cutoff value. Recent studies in SCLC indicate significant improvements in ORR and PFS in patients with high expression of DLL3 (defined as positive staining in ≥ 50% of tumor cells); the high expression group demonstrated significant benefits in confirmed objective response (35% versus 0%) and disease control (90% versus 60%) compared to the low DLL3 expression group .
Additionally, in another phase II clinical trial, the ORR was 14.3% in the DLL3 high-expression group, with high expression defined as positive staining in ≥ 75% of tumor cells . In phase I/II studies of multiple cancer types, a higher remission rate was observed with high DLL3 expression . However, some patients with DLL3 expression in < 50% of tumor cells also exhibited stable disease for a prolonged duration , . Hence, subsequent clinical phases should evaluate efficacy in patients exhibiting any measurable level of DLL3 expression. The field of immuno-oncology (IO) has revolutionized the landscape of cancer therapy, actively influencing patient outcomes . PD-L1 has been identified as the first biomarker for anti-PD-1 therapy and is included in the prescribing information for pembrolizumab. Other markers currently related to immunotherapy efficacy include tumor mutational burden (TMB), mismatch repair system deficiency (dMMR), high microsatellite instability (MSI-H), neoantigens, mutations in antigen presentation pathways, and circulating tumor DNA (ctDNA), which serve as indicators for selecting patients who may benefit from immunotherapy . In a study on pancreatic ductal adenocarcinoma, high DLL3 expression was positively correlated with PD-L1/2 expression . Another study in SCLC observed that cases with high DLL3 expression and negative NOTCH1 expression had a higher PD-L1 expression rate, revealing a favorable prognosis in such SCLC patients . However, our study did not find any correlation between DLL3 and PD-L1 expression in GEP-NECs. Although no significant correlation was observed, 9 out of 40 (22.5%) patients in our study cohort co-expressed DLL3 and PD-L1, suggesting the feasibility of combining DLL3-targeted therapy and immunotherapy in some patients and providing a theoretical basis for developing new treatment strategies for patients with GEP-NECs; this requires further clinical validation.
In recent studies, four subtypes of SCLC have been identified based on the unique expression patterns of four key transcription factors: SCLC-A, SCLC-N, SCLC-P, and SCLC-Y . Baine et al. confirmed higher DLL3 expression in ASCL1-high SCLC compared to other groups . Wang et al. classified NECs into five subtypes (ASCL1, NEUROD1, HNF4A, POU2F3, and YAP1) across various tumor sites (including GEP) . Although our study did not further classify subtypes, we verified the expression of ASCL1 and DLL3 in 111 available tumor tissues for the first time, showing a significantly higher DLL3 expression rate in the ASCL1-positive group (likely corresponding to the A subtype) than in the ASCL1-negative group (likely corresponding to non-A subtypes) at the protein level, confirming a significant correlation. However, this correlation was only evident in small cell GEP-NECs and was not seen in the large cell or mixed types, which differs from observations in pulmonary neuroendocrine tumors, where the correlation of DLL3 and ASCL1 expression was observed in both SCLC and LCNEC of the lung , , . Nonetheless, these results suggest that patients with the ASCL1 subtype may benefit from DLL3-targeted therapy. Furthermore, the study by Wang et al. revealed that the A and N types of GEP-NEC are classified as NE-high types . Similarly, in our cohort, among the 13 ASCL1-positive cases with available tissue that underwent NE staining, 10 were positive for ≥ 2 NE markers. Additionally, in tumors positive for ≥ 2 NE markers, the expression rate of DLL3 was higher than in the subgroup with < 2 NE markers, consistent with the results observed in pulmonary neuroendocrine carcinomas . A classic pathway in the development of SCLC involves tumor initiation on a background of bi-allelic RB1 and TP53 mutations, driven by NOTCH signaling inactivation .
While the roles of the DLL3/Notch pathway and of RB1 and TP53 alterations in the development of GEP-NECs have not been extensively studied, TP53 and RB mutations are the most common genetic alterations in high-grade neuroendocrine tumors of the digestive system, in contrast to the MEN1 and ATRX/DAXX mutations and mTOR pathway activation that frequently occur in NETs . This may explain why DLL3 is negative in well-differentiated neuroendocrine tumors but shows positive reactions in some high-grade neuroendocrine tumors, suggesting that a subset of digestive neuroendocrine tumors may share a carcinogenic pathway similar to that of SCLC. However, this speculation needs further confirmation in future studies. DLL3 staining was negative in all GAC cases regardless of the degree of differentiation. Among the nine cases of MiNENs comprising a mixture of adenocarcinoma and NEC, no positive signal was detected in the adenocarcinoma component. Hence, DLL3 staining may serve as a tool to differentiate gastric NECs from GACs with 100% specificity, particularly in biopsy samples, where limited tissue may hinder a definitive diagnosis. Based on the classification established by the World Health Organization in 2010, NENs were divided into well-differentiated NETs (NET G1 and NET G2) and poorly differentiated NECs . Despite being highly proliferative, certain tumors progress slowly and have a favorable prognosis, while others follow an aggressive course. To address this heterogeneity, a subgroup termed NET G3 was introduced into the GEP-NEN classification system, representing a prognosis intermediate between NET G1/G2 and NEC . Nevertheless, distinguishing NET G3, which has a relatively favorable prognosis, from poorly differentiated NECs remains challenging for pathologists. Currently, a comprehensive diagnostic approach combining clinical presentation, morphology, immunohistochemistry, and molecular biomarkers is preferred.
These biomarkers include RB, DAXX/ATRX, SSTR2, CgA, and p53 . No large-sample studies have confirmed the value of DLL3 in distinguishing NET G3 from NEC. A small-sample study on GEP-NENs found no DLL3 expression in NET G3 (0/5) . Chen et al. utilized various methods, including WES, FISH, qPCR, and IHC, to identify DLL3 expression in three out of eight GEP-NET G3 cases, with no abnormal DLL3 expression in NET G2 . Our study marks the first report of DLL3 protein expression in GEP-NET G3 in a relatively large sample, suggesting the differential diagnostic value of DLL3 and calling for further research on DLL3-targeting agents in NET G3. Additionally, there was no significant difference in DLL3 expression among primary tumors, lymph node metastases, and distant metastases. Therefore, DLL3 staining in metastases appears to reflect the status of the primary tumor, suggesting the feasibility of applying DLL3-targeted therapy in advanced patients with distant organ metastases. In a study of DLL3 expression in SCLC without targeted therapy, patient prognosis was found to be unrelated to DLL3 expression status, with high or low expression (≥ 50% cutoff) showing no impact on survival . Similar findings were observed in the GEP-NEC cohort in our study. However, another study encompassing SCLC, carcinoid, and atypical carcinoid cases noted that DLL3-high expression (≥ 50% positive tumor cells) correlated with improved OS in SCLC ( p = 0.049), without adjusting for age, tumor dimension, and stage . Conversely, a small-sample cohort study of GEP-NENs ( n = 46) observed significantly better PFS and OS in the DLL3-negative group, largely because of the predominance of NET G1/G2/G3 cases in that cohort . Our study, the largest GEP-NEC cohort to date, found that DLL3 expression was unrelated to the prognosis of patients with GEP-NECs. This study had some limitations.
As this was a single-center study with a modest sample size, future research should involve larger cohorts to validate the differences in DLL3 expression across primary tumor sites and to further assess the diagnostic performance in distinguishing NET G3 from NECs of the digestive system. Although cytoplasmic and/or membranous staining was considered positive, as reported by other studies , , the significance of nuclear positivity remains poorly understood. This highlights a limitation of our study, as the relationship between DLL3 staining pattern (cytoplasmic, membranous, punctate, or nuclear) and therapeutic response to DLL3-targeted therapy remains unclear. Further investigation is required to elucidate whether different DLL3 staining patterns are associated with different clinical outcomes. Moreover, the study’s retrospective nature introduces potential biases stemming from changes in clinical practice over time, which highlights the need for prospective studies to more effectively clarify the influence of DLL3 expression on patient prognosis. Our study confirmed DLL3 expression in GEP-NECs and found that DLL3 expression was related to the SCNEC subtype and to chemotherapy. The DLL3 expression rate in NET G3 supports the application of DLL3-targeted therapy in high-grade NETs of the digestive system. DLL3 IHC was useful for distinguishing NET G3 from NECs, as well as GACs from NECs. Finally, we confirmed the correlation between DLL3 and ASCL1 protein expression in GEP-NECs. Our study suggests that DLL3 expression is not a prognostic factor in patients with GEP-NECs. Below is the link to the electronic supplementary material.
Supplementary Material 1
Supplementary Material 2
Supplementary Material 3
Supplementary Material 4
Vascular endothelial growth factor-A is an immunohistochemical biomarker for the efficacy of bevacizumab-containing chemotherapy for duodenal and jejunal adenocarcinoma
Although the small bowel comprises 75% of the total length and more than 90% of the mucosal surface area of the gastrointestinal tract , small bowel adenocarcinoma (SBA) is very rare as a primary tumour location, comprising only 1 to 3% of all gastrointestinal cancers. SBA accounts for 30 to 40% of all small bowel cancers, and the annual incidence of SBA is approximately 3.9 per million persons in the United States and 5.0 per million persons in Europe . Because of the delayed manifestation of symptoms and the difficulty of screening the entire small bowel by conventional esophagogastroduodenoscopy and colonoscopy [ , – ], 30 to 35% of SBAs are diagnosed with distant metastases [ – ]. We recently demonstrated that bevacizumab in combination with platinum-based chemotherapy is effective and well-tolerated for metastatic SBA (mSBA) , consistent with other reports [ – ]. Furthermore, Legue et al. reported that bevacizumab is effective for metastatic ileal adenocarcinoma (mIA), but it has remained unclear whether bevacizumab is also effective for metastatic duodenal and jejunal adenocarcinoma (mDJA). Bevacizumab is an anti-vascular endothelial growth factor (VEGF) monoclonal antibody that binds to VEGF-A and prevents its binding to VEGF receptors on endothelial and cancer cells . Overman et al. reported that characterisation of VEGF-A expression has potential benefit for a VEGF-targeted therapeutic strategy in SBA. Rohrberg et al. reported that immunohistochemical expression of VEGF-A could be a biomarker for the efficacy of bevacizumab in upper gastrointestinal cancers, including metastatic gastric cancer (GC). The potential use of VEGF-A expression as a biomarker for the efficacy of bevacizumab for mDJA, however, has not yet been evaluated.
Mucinous immunophenotypic classification (i.e., intestinal [I]-type, gastrointestinal [GI]-type, gastric [G]-type, or null [N]-type) using CD10, MUC2, MUC5AC, and MUC6 has been investigated in GC and colorectal cancer (CRC). The mucinous immunophenotype is reported to be a prognostic factor in GC and useful for evaluating the biological behaviour and tumorigenesis of CRC . The usefulness of this classification for evaluating prognosis or tumorigenesis in mSBA, however, has not been investigated. Furthermore, immunohistochemical investigation, including VEGF-A expression and mucinous immunophenotypic classification, with regard to the use of bevacizumab in patients with mDJA has not been performed. The aim of the present study was to comprehensively analyse immunohistochemical expression, including VEGF-A expression, and to explore its usefulness for selecting first-line chemotherapy, especially in combination with bevacizumab, for patients with mDJA.
Patients

This was a retrospective multicentre study. From January 2008 to December 2017, we enrolled patients over 16 years of age who were histologically diagnosed with adenocarcinoma of the duodenum (excluding the ampulla of Vater), jejunum, or ileum, and had received palliative chemotherapy for unresectable disease or disease recurrence, with residual specimens sufficient for immunohistochemical staining, at 15 hospitals participating in the Osaka Gut Forum. This study was performed in accordance with the Declaration of Helsinki, and the ethics committee of each individual institution approved the study. Written informed consent was waived by the ethics committees by providing participants the opportunity to opt out of the study.

Data collection

The following data were obtained from the medical records at each institution: patient characteristics (age, sex, Eastern Cooperative Oncology Group performance status [PS]) , primary tumour location (duodenum excluding the ampulla of Vater, jejunum, or ileum), histological type (differentiated/undifferentiated) , tumour biomarker levels (serum carcinoembryonic antigen [CEA] and carbohydrate antigen 19–9 [CA19–9]), the number of metastatic organs, and metastatic sites (liver, lung, lymph node, or peritoneal dissemination). Best response to chemotherapy was evaluated according to the Response Evaluation Criteria in Solid Tumours (version 1.1) . The National Cancer Institute Common Terminology Criteria (version 4.0) was used to evaluate the toxicity of therapeutics. Progression-free survival (PFS) was defined as the duration from the initiation of chemotherapy until the date of disease progression. Overall survival (OS) was defined as the duration from the initiation of chemotherapy until death, loss to follow-up, or the current date. Surviving patients were censored on their last follow-up date.
Treatment

The patients were divided into 3 groups according to the first-line chemotherapy regimen, based on the use of bevacizumab and of fluoropyrimidine plus platinum: the Bevacizumab+ Platinum Group, patients who received bevacizumab in combination with CAPOX or modified FOLFOX6 (mFOLFOX6); the Platinum Group, patients who received fluoropyrimidine and platinum without bevacizumab; and the Monotherapy Group, patients who received monotherapy with a fluoropyrimidine or another agent because they were considered unable to tolerate combination therapy due to advanced age, low PS, etc. These treatments were generally repeated until disease progression, unacceptable toxicity, or a patient’s request to terminate treatment. The chemotherapy regimens for each group were as follows:

Bevacizumab+ Platinum Group
- Bevacizumab with CAPOX: bevacizumab 7.5 mg/kg and oxaliplatin (130 mg/m²) intravenously on day 1, plus capecitabine (2000 mg/m²/day) orally on days 1–14, every 3 weeks.
- Bevacizumab with mFOLFOX6: bevacizumab 5 mg/kg, l-leucovorin (LV; 200 mg/m²), oxaliplatin (85 mg/m²), and bolus 5-FU (400 mg/m²), followed by infusion of 5-FU (2400 mg/m²) over 46 h, every 2 weeks.

Platinum Group
- CAPOX, mFOLFOX6: same as above, without bevacizumab.
- SP: tegafur, gimeracil, and oteracil potassium (S-1; 80 mg/m²/day) orally on days 1–14 and cisplatin (60 mg/m²) intravenously on day 8, every 5 weeks.
- SOX: oxaliplatin (100 mg/m²) intravenously on day 1 and S-1 (80 mg/m²/day) orally on days 1–14, every 3 weeks.

Monotherapy Group
- S-1 (80 mg/m²/day) orally for 28 days every 6 weeks.
- Capecitabine (1250 mg/m²/day) orally for 14 days every 3 weeks.
- Uracil and tegafur (UFT; 300 mg/m²/day) orally for 28 days every 5 weeks.
- Gemcitabine (GEM; 1000 mg/m²) intravenously on days 1, 8, and 15, every 4 weeks.
- 5-FU + LV: 5-FU (600 mg/m²) bolus plus LV (250 mg/m²) once a week for 6 weeks, every 8 weeks.
- Docetaxel (DTX; 60 mg/m²) intravenously on day 1, every 3 weeks.
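As a concrete illustration of the protocol dosing above, the sketch below computes per-administration doses for the bevacizumab + CAPOX arm from body weight (bevacizumab is dosed per kg) and body surface area (oxaliplatin and capecitabine are dosed per m²). The function name and the patient values (70 kg, 1.8 m²) are hypothetical examples, not study data, and this is not clinical software.

```python
# Illustrative dose calculation for the bevacizumab + CAPOX regimen
# described above. Dose levels are taken from the text; the patient
# parameters here are hypothetical.

def capox_bev_doses(weight_kg: float, bsa_m2: float) -> dict:
    """Return day-1 doses (mg) and the daily capecitabine dose (mg/day)."""
    return {
        "bevacizumab_mg": 7.5 * weight_kg,         # 7.5 mg/kg IV, day 1
        "oxaliplatin_mg": 130 * bsa_m2,            # 130 mg/m^2 IV, day 1
        "capecitabine_mg_per_day": 2000 * bsa_m2,  # 2000 mg/m^2/day, days 1-14
    }

doses = capox_bev_doses(weight_kg=70, bsa_m2=1.8)
print(doses)  # bevacizumab 525 mg, oxaliplatin 234 mg, capecitabine 3600 mg/day
```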
Immunohistochemistry

Paraffin blocks or unstained slides were collected at the Department of Gastroenterology and Hepatology, Osaka University Graduate School of Medicine. All specimens were fixed in formalin, embedded in paraffin, and cut into 4-μm thick sections for immunohistochemistry (IHC) and haematoxylin and eosin staining. The primary antibodies for IHC are listed in Supplemental Table . Staining was conducted on the Dako Autostainer Link 48 platform (Agilent, Santa Clara, CA, USA) with an automated staining protocol. Immunohistochemically stained slides were independently evaluated by 2 of 3 certified gastroenterologists (T.A., T.T., and S.I.) who were blind to the clinicopathological information, and cases with different interpretations were assessed by a certified pathologist (E.M.). CD10 was expressed in a cytoplasmic pattern with membranous accentuation. MUC2, MUC5AC, MUC6, VEGF-A, and β-catenin were expressed in the cytoplasm of the tumour cells (Supplemental Figure a). TP53 and Ki67 were expressed in the nucleus of the tumour cells (Supplemental Figure a). Immunohistochemically stained slides were evaluated as follows: CD10, MUC2, MUC5AC, MUC6, and β-catenin were evaluated as positive if more than 5% of the tumour cells were stained, and VEGF-A, TP53, and Ki67 were evaluated as high if over 50% of the tumour cells were stained. Mismatch repair (MMR) protein (MLH1, MSH2, MSH6, and PMS2) was evaluated as negative when all tumour cells showed loss of nuclear staining compared with infiltrating lymphocytes as a positive internal control (Supplemental Figure b), and tumours with loss of any MMR protein were labelled as MMR protein-deficient (MMRD).
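The dichotomization rules above can be expressed as a small decision function. This is our own illustrative sketch (function and variable names are hypothetical); the cut-offs (>5% for positive, >50% for high, loss of any MMR protein for MMRD) are taken from the text.

```python
# Sketch of the IHC scoring rules described above (not study code).

def marker_call(percent_stained: float, marker: str) -> str:
    """Dichotomize a marker by the cut-offs used in this study."""
    if marker in ("CD10", "MUC2", "MUC5AC", "MUC6", "beta-catenin"):
        return "positive" if percent_stained > 5 else "negative"
    if marker in ("VEGF-A", "TP53", "Ki67"):
        return "high" if percent_stained > 50 else "low"
    raise ValueError(f"unknown marker: {marker}")

def is_mmrd(mmr_retained: dict) -> bool:
    """Tumour is MMR protein-deficient if ANY of MLH1/MSH2/MSH6/PMS2 is lost."""
    return not all(mmr_retained[p] for p in ("MLH1", "MSH2", "MSH6", "PMS2"))

print(marker_call(60, "VEGF-A"))  # → high
print(is_mmrd({"MLH1": False, "MSH2": True, "MSH6": True, "PMS2": True}))  # → True
```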
Mucinous immunophenotypic classification

According to combined CD10 and mucinous immunophenotypes, we classified all cases, as shown in Supplemental Table , as intestinal type (I-type, CD10+ or MUC2+/MUC5AC−/MUC6−), gastrointestinal type (GI-type, CD10+ or MUC2+/MUC5AC+ or MUC6+), gastric type (G-type, CD10−/MUC2−/MUC5AC+ or MUC6+), or null type (N-type, CD10−/MUC2−/MUC5AC−/MUC6−), as previously reported for the duodenum , GC , and CRC .

Statistical analysis

Continuous variables are presented as the median and interquartile range. Categorical variables are presented as frequencies. Differences in the distribution of variables were evaluated using Fisher’s exact test. PFS and OS were estimated by the Kaplan-Meier method using the log-rank test. The hazard ratio (HR) and corresponding 95% confidence interval (CI) were estimated by univariate and multivariate Cox proportional hazards models with stratification variables and other relevant covariates (immunohistochemical expression and immunophenotypes). Variables determined to be significant in the univariate analysis were selected for the multivariate analysis. All reported P-values were 2-sided, and P < .05 was considered statistically significant. Statistical analyses were performed using JMP statistical software (version 14.3.0; SAS Institute, Inc., Cary, NC, USA).
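The four-way classification reduces to a simple rule on the two intestinal markers (CD10, MUC2) and the two gastric markers (MUC5AC, MUC6). A minimal sketch (function name is our own; booleans encode positive/negative staining calls):

```python
# Sketch of the I/GI/G/N mucinous immunophenotype rule described above.

def mucin_phenotype(cd10: bool, muc2: bool, muc5ac: bool, muc6: bool) -> str:
    intestinal = cd10 or muc2   # intestinal markers: CD10, MUC2
    gastric = muc5ac or muc6    # gastric markers: MUC5AC, MUC6
    if intestinal and not gastric:
        return "I"   # intestinal type
    if intestinal and gastric:
        return "GI"  # gastrointestinal type
    if gastric:
        return "G"   # gastric type
    return "N"       # null type

print(mucin_phenotype(True, True, False, False))  # → I
```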
Clinicopathological characteristics

A total of 75 patients with mSBA were included in the study and 1 patient was excluded due to insufficient SBA material for the analysis. Clinicopathological characteristics of the 74 remaining patients with mSBA are provided in Table . Of the 74 patients, 45 (60.8%) were older than 65 years of age, 49 (66.2%) were men, and 50 patients (67.6%) were PS 0. Primary tumour location was the duodenum in 38 patients (51.3%), jejunum in 27 (36.5%), and ileum in 9 (12.2%). The histological type of mSBA was differentiated-type in 57 patients (77.0%). The proportions of clinicopathological characteristics were comparable between patients with mDJA and those with mIA. The number of patients receiving each type of first-line chemotherapy is shown in Supplemental Table . Of the 74 cases, 16 (21.6%), 39 (52.7%), and 19 (25.7%) were classified into the Bevacizumab+ Platinum, Platinum, and Monotherapy Groups, respectively.

Immunohistochemical expression

Immunohistochemical expression data from the 74 patients with mSBA are shown in Table . Specimens were obtained by biopsy in 35 patients (47.3%) and by surgery in 39 patients (52.7%). Expression of VEGF-A was high in 42 patients (56.8%). Expression of CD10, MUC2, MUC5AC, and MUC6 was evaluated as positive in 55 (74.3%), 59 (79.7%), 45 (60.8%), and 29 (39.2%) of the patients, respectively. On the basis of mucinous immunophenotyping, 23 patients (31%) were classified as having I-type, 45 (60.8%) as having GI-type, 5 (6.8%) as having G-type, and 1 (1.4%) as having N-type of mSBA. The percentage of patients with I-type was significantly lower in those with mDJA (24.6%) than in those with mIA (77.8%, P = 0.003), and conversely, GI-type was significantly higher in those with mDJA (66.2%) than in those with mIA (22.2%, P = 0.023).
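The I-type comparison above can be recomputed from the counts implied by the reported percentages (16/65 mDJA patients vs 7/9 mIA patients). Below is a stdlib-only sketch of the two-sided Fisher's exact test; the analyses in the study were performed in JMP, and the function here is our own illustration (scipy.stats.fisher_exact uses the same probability-based two-sided convention).

```python
# Two-sided Fisher's exact test from scratch, applied to the 2x2 table
# [[16, 49], [7, 2]]: I-type vs non-I-type counts for mDJA vs mIA.
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """P-value for the 2x2 table [[a, b], [c, d]]."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d
    denom = comb(n, col1)
    def p_table(x):  # probability of a table with top-left cell = x
        return comb(row1, x) * comb(row2, col1 - x) / denom
    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # sum all tables at least as extreme (probability <= observed)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

p = fisher_exact_two_sided(16, 49, 7, 2)
print(round(p, 3))  # → 0.003, matching the reported P value
```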
Efficacy of bevacizumab-containing chemotherapy for patients with mSBA

The efficacy of bevacizumab-containing chemotherapy was investigated by stratifying patients into those with mDJA or mIA. In those with mIA, the OS in the Bevacizumab+ Platinum Group (51 months [19–94]) was significantly longer than that in the Platinum Group (17.5 months [12–23], P = 0.047; Supplemental Figure b), as previously reported . We also found that in those with mDJA, both the PFS and OS in the Bevacizumab+ Platinum Group (15 months [1-] and 26 months [5-]) tended to be longer than those in the Platinum Group (7 [5–9] and 17 [8–22], P = 0.075 and P = 0.077; Supplemental Figure c and d, respectively).

VEGF-A expression as a factor for prolonging PFS and OS in patients with mDJA

When we searched for factors associated with a prolonged PFS and OS in mDJA, univariate analysis followed by multivariate analysis revealed that high VEGF-A expression was a significant factor for prolonging PFS (HR, 0.58, 95% CI, 0.34–0.99; Table ) and a possible factor for prolonging OS (HR, 0.56, 95% CI, 0.31–1.01; Supplemental Table ). The clinicopathological characteristics and immunohistochemical expression were not significantly different between patients with high VEGF-A expression and those with low VEGF-A expression (Supplemental Table ). We then investigated the PFS and the OS among mDJA patients with high or low VEGF-A expression. The PFS was significantly longer in patients with high VEGF-A expression (median [95% CI] 9 months [4–10]) than in those with low VEGF-A expression (5 months [1–7], P = 0.018; Fig. a) and the OS tended to be longer in those with high VEGF-A expression (20 months [15–24]) than in those with low VEGF-A expression (7 months [5–14], P = 0.059; Supplemental Figure a). In the Bevacizumab+ Platinum Group, the PFS was significantly longer in patients with high VEGF-A expression (26 months [15-]) than in those with low VEGF-A expression (5 months [1–9], P = 0.001; Fig. b) and the OS tended to be longer in patients with high VEGF-A expression than in those with low VEGF-A expression (P = 0.062; Supplemental Figure b). In the Platinum Group, neither the PFS nor the OS differed significantly between patients with high VEGF-A expression (6.5 months [4–10] and 18 months [11–22]) and patients with low VEGF-A expression (7 months [2–7] and 11 months [4–41], P = 0.636 and P = 0.482; Fig. c and Supplemental Figure c).

VEGF-A expression and bevacizumab treatment for patients with mDJA

We next investigated the PFS and OS among the treatment groups by stratifying patients with mDJA into groups with high or low VEGF-A expression. In patients with high VEGF-A expression, the PFS was significantly longer in the Bevacizumab+ Platinum Group (26 months [15-]) than in the Platinum Group (6.5 months [4–10], P = 0.025; Fig. a). In addition, the OS tended to be longer in the Bevacizumab+ Platinum Group than in the Platinum Group (P = 0.056; Fig. b). In patients with low VEGF-A expression, neither the PFS nor the OS differed significantly between the Bevacizumab+ Platinum and Platinum Groups (P = 0.519 and P = 0.642; Fig. c, d).

Toxicity

Finally, patients were evaluated in terms of treatment-related toxicity. The proportion of patients with Grade 3 to 4 toxicity did not differ significantly between the Bevacizumab+ Platinum Group (50.0%) and Platinum Group (35.9%, P = 0.375; Table ). The proportion tended to be smaller (16.7%) in the Monotherapy Group than in the Platinum Group (P = 0.214).
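The PFS and OS curves compared above were estimated with the Kaplan-Meier method; the product-limit estimator itself is simple enough to sketch in a few lines. The data below are synthetic toy values (NOT study data), and the helper names are our own; in practice packages such as JMP or lifelines perform this calculation.

```python
# Pure-Python sketch of the Kaplan-Meier product-limit estimator used for
# survival curves like those above. events[i] = 1 for an observed event,
# 0 for a censored observation.

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each event time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, surv, curve = len(times), 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = censored = 0
        while i < len(order) and times[order[i]] == t:  # handle ties at t
            deaths += events[order[i]]
            censored += 1 - events[order[i]]
            i += 1
        if deaths:
            surv *= 1 - deaths / at_risk   # product-limit update
            curve.append((t, surv))
        at_risk -= deaths + censored
    return curve

def median_survival(curve):
    """First time at which the survival estimate drops to <= 0.5."""
    return next((t for t, s in curve if s <= 0.5), None)

toy = kaplan_meier([3, 5, 5, 8, 12, 16], [1, 1, 0, 1, 1, 0])
print(toy)
print(median_survival(toy))  # → 8
```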
To the best of our knowledge, this is the first study to evaluate the relation of the immunohistochemical expression of VEGF-A, which could readily be applied in clinical practice, to the efficacy of bevacizumab in combination with platinum-based first-line chemotherapy for patients with mDJA. A strength of the present study is that the immunostaining was performed centrally with an automated staining protocol and read centrally in a multicentre setting. Although the multivariate analysis in Table suggests that high VEGF-A expression, rather than bevacizumab-containing chemotherapy, is the prognostic factor, we also demonstrated in Fig. b and c that the PFS was significantly longer in patients with high VEGF-A expression than in those with low VEGF-A expression in the Bevacizumab+ Platinum Group, but not in the Platinum Group. These data suggest that the clinical value of bevacizumab in mDJA becomes apparent when immunohistochemical VEGF-A expression is high. Furthermore, the results in Fig. demonstrated that patients with mDJA having high VEGF-A expression who received platinum-based chemotherapy with bevacizumab as first-line treatment had longer PFS and OS than those treated without bevacizumab. On the other hand, neither the PFS nor the OS of patients with low VEGF-A expression differed significantly between those treated with or without bevacizumab. The potential of immunohistochemical VEGF-A expression to serve as a molecular biomarker for selecting bevacizumab-containing chemotherapy for patients with mDJA had not been evaluated previously. Thus, we are the first to demonstrate that immunohistochemical expression of VEGF-A has potential as a biomarker for predicting the efficacy of bevacizumab-containing first-line chemotherapy in patients with mDJA. The tumorigenesis of SBA reportedly differs from that of CRC in some aspects despite their morphological similarities.
Immunohistochemical investigation of tumorigenic pathways in SBA and CRC revealed that positive β-catenin expression is less frequent in SBA (19.2 to 19.6%) than in CRC (78.6%), although the proportion of patients with high TP53 expression in SBA (41.6 to 53.8%) is similar to that in CRC (43.5%) and the proportion of patients with MMRD in SBA (8.0 to 23.0%) is similar to that in CRC (12.5%) [ – ]. The proportions of mSBA patients with positive β-catenin expression (12.1%), high TP53 expression (43.2%), and MMRD (5.4%) in our study were similar to those in previous reports [ , , ]. We analysed each immunohistochemical expression separately in patients with mDJA or mIA. The proportion of mSBA patients with high TP53 expression, high Ki67 expression, positive β-catenin expression and MMRD did not differ between those with mDJA and mIA, consistent with a previous report . Our data indicated that the expression of neither TP53, Ki67, β-catenin, nor MMRD was a factor for prolonging OS or PFS in patients with mSBA. In the present study, we first evaluated the mucinous immunophenotype according to the expression of MUC2, MUC5AC, MUC6, and CD10 in patients with mSBA excluding the ampulla of Vater. Previous mucinous immunophenotypic classifications of GC and CRC revealed that the proportion of I-type is 10 to 30% in GC and 55 to 75% in CRC, and that of GI/G-type is 55 to 80% in GC and 5 to 30% in CRC [ , , , , ]. In the present study, the proportions of I- (77.8%), GI- (22.2%), and G-type (0.0%) mIA were similar to those in CRC, while those of I- (24.6%), GI- (66.2%), and G-type (7.7%) mDJA were similar to those in GC. This finding indicates that application of a suitable chemotherapy regimen depending on the primary tumour location can be useful in mSBA. 
Although VEGF-A is reported to have a key role in carcinogenesis, and its expression is related to prognosis in SBA as well as CRC , there are no previous reports on the usefulness of evaluating VEGF-A expression for selecting bevacizumab-containing chemotherapy in patients with mSBA. Our data revealed that VEGF-A expression was a predictive factor for the efficacy of bevacizumab for mDJA, as previously reported for upper gastrointestinal cancers, including metastatic GC . Because only 9 patients with mIA were included in the present study, we could not evaluate whether VEGF-A expression was useful for selecting bevacizumab-containing chemotherapy in this group. The present study has several limitations. First, it was a retrospective study with a small sample size, and a patient selection bias cannot be excluded. Considering that mSBA is a very rare disease, however, this was one of the largest studies evaluating the clinical efficacy of bevacizumab in combination with platinum-based first-line chemotherapy in patients with mSBA. Although we demonstrated the potential of VEGF-A as an immunohistochemical biomarker for selecting bevacizumab-containing first-line chemotherapy for patients with mDJA, a validation study is required because of the small number of patients. Second, the chemotherapy regimens were not unified because no definite regimen has been approved for mSBA, and the regimen was selected by each treating physician. Larger prospective studies are needed to determine the optimal cytotoxic chemotherapy regimen with bevacizumab as first-line therapy in these patients.
Immunohistochemical expression of VEGF-A has potential as a useful biomarker for predicting the efficacy of bevacizumab-containing first-line chemotherapy in patients with mDJA.
Additional file 1: Supplemental Table 1. Antibodies used in the present study. Supplemental Table 2. Mucinous immunophenotypic classification in the present study. Supplemental Table 3. First-line chemotherapy regimens used in 74 patients with mSBA. Supplemental Table 4. Univariate and multivariate analyses of immunohistochemical expression, mucinous immunophenotypes, and chemotherapy for prolonging OS in patients with mDJA. Supplemental Table 5. Comparison of clinicopathological characteristics and immunohistochemical expression of mDJA patients with high and low VEGF-A expression. Additional file 2: Supplemental Figure 1. Molecular marker expression profile of CD10, mucins, VEGF-A, TP53, Ki67, β-catenin, and MMRD. (a) CD10 was expressed in a cytoplasmic pattern with membranous accentuation, and MUC2, MUC5AC, and MUC6 were expressed in the cytoplasm. VEGF-A, TP53, and Ki67 were expressed in the cytoplasm, and β-catenin was expressed in the nuclei. (b) When MLH1 was deficient, staining for MLH1 and PMS2 was negative and staining for MSH2 and MSH6 was positive. VEGF-A: vascular endothelial growth factor A. Supplemental Figure 2. Cumulative PFS curve (a) and OS curve (b) of mIA patients and cumulative PFS curve (c) and OS curve (d) of mDJA patients in the Bevacizumab+ Platinum (B+ P) Group, the Platinum (P) Group, and the Monotherapy (M) Group. In mIA patients, the PFS was longer in the B+ P Group (median [95%CI] 17.5 months [5–33]) than in the P Group (7 months [6–8]; P = 0.238) (a). The OS was significantly longer in the B+ P Group (51 months [19–94]) than in the P Group (17.5 months [12–23]; P = 0.047) (b). In mDJA patients, the PFS did not differ significantly between the B+ P Group (15 months [1-]) and the P Group (7 months [5–9]; P = 0.075) (c). The OS tended to be longer in the B+ P Group (26 months [5-]) than in the P Group (17 months [8–22]; P = 0.077) (d).
PFS: progression-free survival, OS: overall survival, mIA: metastatic ileal adenocarcinoma, mDJA: metastatic duodenal and jejunal adenocarcinoma. Supplemental Figure 3. Cumulative OS curve of mDJA patients with high VEGF-A expression or low VEGF-A expression (a) in Bevacizumab+ Platinum (B+ P) Group (b) and in Platinum (P) Group (c). The OS tended to be longer in mDJA patients with high VEGF-A expression (median [95%CI] 20 months [15–24]) than in those with low VEGF-A expression (7 months [5–14], P = 0.059) (a). In B+ P Group, the OS tended to be longer in mDJA patients with high VEGF-A expression than in those with low VEGF-A expression ( P = 0.062) (b). In P Group, the OS was significantly longer in mDJA patients with high VEGF-A expression (18 months [11–22]) than in those with low VEGF-A expression (11 months [4–41], P = 0.482) (c). OS: overall survival, mDJA: metastatic duodenal and jejunal adenocarcinoma, VEGF-A: vascular endothelial growth factor A.
|
PARP Inhibitors and Radiotherapy: A New Combination for Prostate Cancer (Systematic Review)

Prostate cancer (PCa) is the most common tumor diagnosed in men worldwide, ranking as the third most common cause of cancer death in Europe and the second in the United States. However, PCa is a complex and heterogeneous condition with different degrees of aggressiveness, ranging from indolent to lethal forms. This diversity of PCa presentations and stages calls for a broad spectrum of treatment options, ranging from active surveillance to surgical procedures, radiation therapies, and intensive multimodal and systemic approaches. This scenario has made it necessary to search for new therapeutic strategies and new combination treatments. One of the most promising strategies involves the DNA-damage response (DDR) pathway. DDR gene alteration creates a reliance on poly(adenosine diphosphate-ribose) polymerase (PARP)-1 for repairing DNA, which causes cancer cell death when PARP-1 is blocked. PARP inhibitors (PARPi) are a recently developed class of targeted drugs that offer a novel approach to treating PCa by exploiting germline and somatic mutations in DNA damage repair (DDR) pathways, allowing for a genetically stratified treatment strategy. This phenomenon, called "synthetic lethality", is based on the theory that two different molecular pathways, which do not cause cell death when disrupted individually, can result in cell death when inhibited at the same time. Remarkably, PARPi in combination with ionizing radiation have demonstrated the ability to enhance cellular radiosensitivity in different tumors. All kinds of radiation have effects, positive or negative, on the biological systems exposed to them. Radiotherapy uses ionizing radiation (IR) from low linear energy transfer (LET) X-rays (photons) to treat tumors.
However, this IR can cause acute and long-term adverse events due to the irradiation of surrounding tissues. The aim of radiation therapy is to kill tumor cells while preserving nearby healthy ones. This objective is not always easy to achieve, as there is a permanent interplay between ionizing radiation (IR) and biological/cellular elements that can occur in two primary ways: direct ionization or excitation of large molecules like DNA, or, more commonly, indirect initiation via the breakdown of water into reactive oxygen species. Among these, hydroxyl radicals are particularly prominent, as they can subsequently react with neighboring large molecules. The critical cellular target that drives tumor cell killing is DNA. Indeed, for low-LET radiation, a 1 Gy dose produces approximately 1000 DNA single-strand breaks (SSBs), 40 DNA double-strand breaks (DSBs), and 1300 DNA base lesions. However, the DNA damage caused by IR does not always drive tumor cells to die, as cells have a sophisticated signaling network that detects DNA damage and initiates a complex repair process. It is precisely here that PARPi play their key role: by blocking the DNA repair machinery, they sustain the DNA damage, which finally drives tumor cells to death. The rationale is that exposure to radiation leads to both physical and biochemical damage to DNA, prompting cells to initiate three primary mechanisms of DNA repair. Two of these pathways repair double-strand DNA breaks (DSBs): non-homologous end-joining (NHEJ) and homologous recombination (HR). The third, base excision repair (BER), repairs single-strand DNA breaks (SSBs), which occur more frequently in the context of external beam radiotherapy, and is the only repair mechanism available in BRCA-mutated cells. SSBs are the most common DNA lesions and are relatively easily repaired, while DSBs represent a greater threat to genome integrity, as they are far more difficult to repair.
However, sometimes SSBs cannot be adequately repaired and are converted to DSBs, which are highly mutagenic and cytotoxic when left unrepaired, interfering with important cellular processes and survival. Regarding DSBs, the NHEJ pathway is responsible for mending the majority of lesions that have two ends. However, when DNA replication forks collapse in the S phase and create DSBs with only one end, NHEJ becomes hazardous due to its potential to create chromosomal rearrangements by reconnecting DNA ends from distinct chromosomes. Consequently, NHEJ (also known as error-prone end-joining) is deliberately restrained at replication forks by components of the second major DSB repair pathway, HR. In that sense, PARP1 contributes to the HR pathway of DSB repair by promoting rapid recruitment of MRE11, EXO1, BRCA1, and BRCA2 to DNA damage sites. Additionally, PARP1 counters NHEJ (the alternative pathway for DSB repair) by inhibiting the attachment of the NHEJ protein Ku to DNA ends, which would otherwise initiate the NHEJ repair mechanism. Regarding SSB repair, PARP1 aids in recruiting the scaffold protein XRCC1 to sites of DNA damage in a mechanism known as BER. Together, this explains why PARP inhibitors, acting against the DDR pathway, enhance the DNA-damaging effects of radiotherapy. Furthermore, PARP inhibitors destabilize replication forks via PARP-DNA entrapment and induce cell death via replication stress-induced mitotic catastrophe. PARP1 interacts with the DNA replication machinery during S phase; in response to replication stress, uncoupling between DNA polymerase and helicase activities generates single-stranded DNA (ssDNA). When this occurs, RPA binds the ssDNA and recruits the S/G2 checkpoint kinase ATR to induce cell cycle arrest. Thus, replication checkpoints prevent accumulation of ssDNA and exhaustion of RPA and thereby safeguard against fork breakage.
In response to replication stress, PARP1 decelerates the progression of replication forks to facilitate fork reversal by counteracting the RECQ1 helicase. It safeguards replication forks against degradation by the MRE11 nuclease, reinforces the stability of RAD51 nucleofilaments at stalled forks in conjunction with PARP2, and triggers activation of the S-phase checkpoint kinase CHK1. Finally, PARP1 also regulates replication and DNA repair at the transcriptional level by stimulating the activity of the transcription factor E2F1, which controls the expression of replication and HR genes. In this scenario, PARPi can serve as radiosensitizers that drive tumor cells to death: by blocking BER-mediated repair of the DNA damage caused by radiotherapy, they heighten the likelihood of replication fork collapse, leading to the formation of persistent DSBs, and they additionally inhibit the HR and NHEJ repair pathways. The objective of this paper is to perform a systematic review of the current evidence regarding the use of PARPi and radiotherapy (RT) in PCa and to give future insight into this topic.
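As a back-of-the-envelope illustration of the per-gray lesion yields quoted in the Introduction (roughly 1000 SSBs, 40 DSBs, and 1300 base lesions per cell per Gy for low-LET radiation), the sketch below scales those yields to an arbitrary dose. The linear scaling with dose is an assumption for illustration only:

```python
# Illustrative only: approximate DNA lesion yields per cell per gray for
# low-LET radiation, as quoted in the text. Linear scaling with dose is
# an assumption of this back-of-the-envelope sketch.
YIELDS_PER_GY = {"SSB": 1000, "DSB": 40, "base_lesion": 1300}

def lesions_per_cell(dose_gy: float) -> dict:
    """Expected lesion counts in one cell for a given absorbed dose (Gy)."""
    return {kind: rate * dose_gy for kind, rate in YIELDS_PER_GY.items()}

# A conventional 2 Gy fraction:
print(lesions_per_cell(2))  # {'SSB': 2000, 'DSB': 80, 'base_lesion': 2600}
```

Even at a standard 2 Gy fraction, DSBs (the lesions most relevant to cell killing) number only in the tens, which is why blocking their repair, rather than increasing their raw number, is an attractive strategy.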
2.1. Search Strategy

In May 2023, we conducted a systematic literature search through the PubMed, Scopus, and Web of Science databases using the PICO criteria: P (Population): prostate cancer cells, xenografts, or patients; I (Intervention): combinations of PARPi and radiotherapy; C (Comparator): no comparator was mandatory; O (Outcomes): safety and oncological outcomes. We utilized a specific search strategy to gather relevant data and evaluated the quality of the studies using a standardized methodology.

2.2. Article Selection

We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Two authors (I.R.B. and B.C.R.) independently screened the articles based on our inclusion and exclusion criteria, with disagreements resolved by a third author (I.O.G.). Following this method, we identified a large number of articles that then went through a selection process. Study identification: using the search strategy described above, we found 77 articles regarding the combination of PARPi and radiotherapy (RT) in prostate cancer: 21 reviews, systematic reviews, and meta-analyses; 3 clinical trials (none of them randomized); and 53 original articles, most of them preclinical studies. Screening: after duplicates were removed, 75 articles were screened by title and abstract. Eligibility: 18 records were assessed via screening of the full text. The inclusion criteria were: (a) reviews, systematic reviews, meta-analyses, clinical trials, and original articles; (b) the combination of radiotherapy and PARPi for prostate cancer treatment. The exclusion criteria were: (a) non-English/Spanish texts; (b) editorials, comments, and letters; (c) non-prostate cancer tumors; and (d) drug and molecular radiotherapy. Study analysis: finally, seven studies were selected for the study analysis.
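The selection flow described in the Methods can be tallied as a quick consistency check. The counts below come from the text; the per-stage exclusion numbers are derived by subtraction and are not stated explicitly in the article:

```python
# PRISMA-style flow counts reported in the Methods.
identified = 77   # records found in PubMed, Scopus, and Web of Science
screened = 75     # after duplicate removal, screened by title/abstract
full_text = 18    # assessed for eligibility via full-text screening
included = 7      # selected for the study analysis

duplicates_removed = identified - screened        # 2
excluded_title_abstract = screened - full_text    # 57
excluded_full_text = full_text - included         # 11

# Sanity check: every identified record is accounted for.
assert duplicates_removed + excluded_title_abstract + excluded_full_text + included == identified
print(duplicates_removed, excluded_title_abstract, excluded_full_text)  # 2 57 11
```

This kind of arithmetic check mirrors what a PRISMA flow diagram encodes graphically: each record must exit the pipeline at exactly one stage.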
3.1. PARP Inhibitors in Prostate Cancer

The two main repair mechanisms for double-strand breaks (DSBs) are homologous recombination (HR) and non-homologous end-joining (NHEJ). The signaling pathway of the HR system is executed by the sequential recruitment of repair proteins into the chromatin surrounding the lesion. The first sensor of DSBs is the MRN complex (MRE11-RAD50-NBS1), which attaches to both sides of the break to signal it. Subsequently, recruitment and accumulation of regulated proteins occur by a complex mechanism employing phosphorylation and ubiquitination mediated by various kinases, including BRCA1 and CtIP. Thus, maintenance of the repair machinery is critical to protecting cells from DNA damage and preventing tumor processes. Poly(ADP-ribose) (PAR) is involved in different cellular processes such as DNA replication, transcription, repair, and cell death. PARPs are the enzymes implicated in PAR synthesis. PARP-1 is a crucial sensor protein for DNA damage that activates signaling pathways promoting appropriate cellular responses and exhibits significantly increased catalytic activity leading to the induction of poly ADP-ribosylation (PARylation). PARylation is a process that involves breaking down nicotinamide adenine dinucleotide (NAD+) and transferring the resulting ADP-ribose to either PARP-1 itself (autoPARylation) or other specific proteins. These activities trigger PARP-1 and other DNA repair enzymes to start DNA repair processes by modifying the structure of chromatin and directing DNA repair factors to the site of damage. Both PARP-1 and PARP-2 facilitate the recruitment and activation of BER factors and consequently facilitate DNA single-strand break (SSB) repair. Moreover, PARP-1 participates in repairing DNA DSBs (through NHEJ and HR) and correcting DNA replication errors. On the other hand, BRCA2 mutations are recognized as significant risk factors for developing PCa.
Homologous recombination repair (HRR) is a repair mechanism that depends on the BRCA1/2 genes. Consequently, tumor cells with deficient BRCA1/2 genes are unable to repair DNA damage through HRR and rely on PARP proteins for the restoration of single-strand breaks (SSBs). When PARP proteins are inhibited by PARPi, DNA repair cannot occur, leading to tumor cell death. Given this background, the utilization of PARPi in PCa is supported by two key factors: the elevated occurrence of genetic mutations in PCa and the synthetic lethality concept. Genetic mutations in PCa involve both germline and somatic alterations. Germline mutations impact all cells within the body and can provide valuable insights for genetic counseling. Somatic alterations, on the other hand, are exclusive to tumor cells and arise as a consequence of inherent genome instability within the tumor itself, as well as clonal selection triggered by prior treatments. In PCa, PARPi act through two mechanisms: (1) competitively binding to the active site, thereby preventing the repair of SSBs and favoring their conversion into DSBs; and (2) trapping PARP-1 onto the damaged DNA, inhibiting autoPARylation. Additionally, PARP-1 plays a role in delaying the progression of replication forks, which further impedes the repair of DSBs, ultimately leading to cell death. Together, these mechanisms contribute to the accumulation of DSBs that HRR-deficient cells cannot repair efficiently. Olaparib, rucaparib, niraparib, and talazoparib are PARPi with different mechanisms of action and distinctive trapping capacities that have been proven in PCa; talazoparib has the greatest trapping capacity and rucaparib the least.
PROfound is a phase III clinical trial (CT) investigating olaparib in patients with metastatic castration-resistant prostate cancer (mCRPC) and HRR gene mutations progressing after enzalutamide or abiraterone acetate plus prednisone (AA) (second-line setting). This CT demonstrated that patients treated with olaparib had significantly longer radiographic progression-free survival (rPFS) (5.8 months vs. 3.5 months, p < 0.001) and a higher objective response rate (ORR) (22% vs. 4%; odds ratio 5.93, 95% CI: 2.01–25.40) compared to the control group receiving enzalutamide or AA. However, with olaparib there were more grade ≥ 3 adverse events (AEs). Thanks to these results, the U.S. Food and Drug Administration (FDA) and now the European Medicines Agency (EMA) have approved the use of this drug in this clinical setting. Rucaparib also received FDA authorization after a single-arm phase II CT (TRITON-2), which showed an ORR of 43.5% and a PSA response rate of 54.8% in BRCA1/2-mutated mCRPC patients who had progressed after new antiandrogen therapies and chemotherapy (from the third-line setting). However, this approval is conditional on the results of the phase III study TRITON-3 (NCT02975934), and this drug has not yet been approved by the EMA. Two more PARPi are currently under evaluation by the FDA: niraparib and talazoparib. A phase II single-arm CT (GALAHAD) showed that niraparib in mCRPC patients who had progressed after taxane chemotherapy and AR-targeted therapy could reach a 41% ORR, a 63% composite response rate (CRR), a median rPFS of 8.2 months, and an OS of 12.6 months in the BRCA1/2-mutant population. Another phase II CT (TALAPRO-1) evaluated talazoparib in DDR-HRR-mutated mCRPC patients who had progressed after chemotherapy and demonstrated an overall ORR of 29.8% and a 46% ORR in BRCA1/2-mutated patients.
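For readers unfamiliar with the odds-ratio figure quoted for PROfound, the calculation has the shape below. The arm sizes here are hypothetical, chosen only to mirror the reported 22% vs. 4% response rates; the trial's actual denominators are not given in this review, which is why the result differs slightly from the published 5.93:

```python
def odds_ratio(events_a: int, n_a: int, events_b: int, n_b: int) -> float:
    """Odds of response in arm A divided by odds of response in arm B."""
    odds_a = events_a / (n_a - events_a)
    odds_b = events_b / (n_b - events_b)
    return odds_a / odds_b

# Hypothetical counts mirroring ~22% vs. ~4% ORR (NOT the trial's actual data).
print(round(odds_ratio(22, 100, 4, 100), 2))  # 6.77
```

Note that an odds ratio is not the same as a ratio of response rates (22/4 = 5.5): odds weight events against non-events, so the two diverge as event rates grow.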
Recently, some phase III clinical trials have shown the benefit of combining a PARPi with a new antiandrogen in the first-line setting of mCRPC. The PROpel trial demonstrated an improvement in rPFS (HR 0.66 [95% CI 0.54–0.81]) with the combination of olaparib and abiraterone compared to abiraterone alone, irrespective of HRR mutation status. Preliminary results from MAGNITUDE showed the benefit of the combination of niraparib and abiraterone, with an improvement in rPFS (relative risk 0.53, 95% CI: 0.36–0.79, p = 0.0014) and a reduction in the risk of disease progression/death (47% vs. 27%) in mCRPC with alterations in genes associated with HRR. More clinical trials are currently ongoing with new combination regimens and PCa settings (available at ClinicalTrials.gov).

3.2. PARPi as Radiosensitizers in Prostate Cancer

Radiosensitivity has been described based on several factors, including the inherent radiosensitivity of the tumor, its repair capacity, the reoxygenation process, cell cycle redistribution, tumoral tissue repopulation, tumor immunity, and vascular endothelial damage. The combination of ionizing radiation with radio-enhancing agents presents an opportunity to enhance the effectiveness of radiotherapy while minimizing potential damage to healthy tissues and reducing toxic side effects. PARP inhibitors (PARPi) possess several qualities that make them suitable for exerting radiosensitizing effects.

3.2.1. The Rationale to Combine PARPi and Radiotherapy

The primary mechanism through which radiation therapy (RT) induces cell death is by causing various types of DNA damage, with double-strand breaks (DSBs) being the most harmful. Therefore, the sensitivity of tumors to ionizing radiation (IR) is closely related to their capacity to repair DSBs. These breaks are repaired through NHEJ and HR. The former operates throughout the cell cycle, although it does not work properly under high levels of DNA damage.
HR, however, can occur only in the S and G2 phases, as these are the only moments when a sister chromatid is available. Following the occurrence of DSBs, rapid phosphorylation of H2AX at S139 takes place, resulting in the formation of γH2AX. This modification serves as a marker for the chromatin surrounding the DSB site and plays a crucial role in facilitating the recruitment of various DNA damage response factors, including 53BP1. In PCa, the detection of γH2AX foci has been utilized as a predictive indicator of radiosensitivity. Targeting one of these DSB repair pathways can greatly enhance radiosensitivity. However, the challenge lies in minimizing the cytotoxic effect on normal cells while specifically targeting the tumor. BRCA2-deficient cells have defective HR, so they are forced to activate other DNA repair pathways, such as BER, that are responsible for high-fidelity DDR. In this regard, inhibition of PARP1, which is responsible for BER, may radiosensitize HR-deficient cells. In addition, PARP1 is involved in BER to repair SSBs, and although IR induces numerous SSBs, their efficient repair means they usually do not lead to substantial cell death. However, PARP1 inhibition leaves SSBs unrepaired, and upon encountering replication forks these can generate one-ended DSBs. These specific DSBs are predominantly repaired through HR. As a result, PARP inhibition not only exhibits toxicity as a monotherapy but also enhances the radiosensitivity of HR-deficient cells. Therefore, the combination of PARPi and radiation emerges as a possible treatment based on the ability of PARPi to amplify unrepaired DNA damage. This ability of PARPi to enhance radiosensitivity has been extensively studied in tumors with BRCA1/2 deficiencies, particularly in hereditary breast and ovarian cancers. However, unlike other tumor types, PCa genomes rarely contain BRCA1/2 mutations.
Additionally, there is not enough information to date to use cellular or molecular profiling to personalize treatment with PARPi in PCa. Hence, it is crucial to explore other, more prevalent genetic abnormalities that render PCa susceptible to the radiosensitizing effects of PARP inhibitors. One of these mechanisms is the erroneous alternative end-joining (Alt-EJ) pathway, in which PARP-1 was found to be crucial. In this context, PARPi can intensify cell death by suppressing homologous recombination and promoting error-prone alt-NHEJ. As a consequence, PARPi might also radiosensitize tumor cells with non-HR deficiencies by suppressing PARP-1-dependent end-joining. This evidence supports the idea that PARPi are potent radiosensitizers whose combination with radiotherapy may improve oncological outcomes thanks to a probable synergistic effect. This combination has proven effective in other tumors such as breast cancer, colorectal cancer, pancreatic cancer, lung cancer, and head and neck cancer. In lung cancer, for example, a phase I CT of the combination of olaparib + RT with or without cisplatin showed that the maximum tolerated dose (MTD) of olaparib with RT was 25 mg/24 h, markedly lower than anticipated, which emphasizes the potent radiosensitizing properties of olaparib. Regarding breast cancer, a phase I CT suggested that PARP inhibition with olaparib concurrent with radiotherapy for early-stage, high-risk triple-negative breast cancer is well tolerated, with no late treatment-related grade 3 or greater toxic adverse events. Three-year overall survival (OS) and event-free survival (EFS) were 83% (95% CI, 70–100%) and 65% (95% CI, 48–88%), respectively. Homologous recombination status was not associated with OS or EFS.

3.2.2. Mechanism of Radiosensitization of PARPi

Different mechanisms have been described by which PARPi can enhance radiosensitivity in tumors:

Inhibition of DNA repair: when PARPi are combined with RT, the repair of SSBs is compromised, which leads to DNA replication fork collapse and the appearance of DSBs that cause cell death. In addition, PARPi also induce "mechanical" replication fork collapse and consequently DSBs. This effect is more potent in BRCA-mutated or BRCAness cells, which have deficient HR; this is an example of a synthetic lethality mechanism.

G2/M arrest: when DNA damage occurs, normal cells activate checkpoints that lead to cell cycle arrest. PARPi have the capacity to arrest cells in the part of the cycle in which they are most sensitive to radiotherapy: the G2 and M phases. This mechanism enhances RT by keeping cells longer in the most radiosensitive phases.

Modulation of chromatin remodeling: PARP-1 inhibition can delay DNA double-strand opening and therefore DNA repair, compounding the DNA damage caused by RT.

Replication-dependent radiosensitization: PARPi exert their radiosensitizing effect mostly during S phase. Tumors have a higher proliferation rate than the surrounding tissues and therefore more cells in S phase, which helps radiosensitize the tumor while sparing the surrounding structures.

Impact on the microenvironment and role of hypoxia: hypoxia induces radioresistance. PARPi show similarities to nicotinamide, a vasodilator, which could help bypass this hypoxia-driven radioresistance.

3.2.3. The Combination of PARPi and Radiotherapy in Prostate Cancer: Preclinical Studies

The combination of RT and PARPi is a promising strategy to enhance DNA damage in tumors. Following this idea, some preclinical studies in prostate cancer have shown that novel agents targeting the DNA repair pathway may help increase the efficacy of irradiation while minimizing potential side effects. Han et al.
first proved in 2013 that radiation resistance triggered by ERG overexpression increased the efficiency of DNA repair with an amplified expression of γ-H2AX, which could be reversed via PARP1 inhibition. They demonstrated that Olaparib radiosensitized ERG-positive cells by a factor of 1.52 (±0.03) in comparison to ERG-negative cells . In 2015, Gani et al. demonstrated in vitro that AZD-2281 (Olaparib) sensitized 22Rv1 cells to radiation, both under normal oxygen conditions (oxia) and in the presence of acute and chronic hypoxia. In addition, they performed an in vivo study where they showed that combining AZD-2281 with fractionated RT led to a significant delay in tumor growth and increased clonogenic cell death without increasing gut toxicity . Mansour et al. (2017) proved that PTEN plays a role in the repair of DNA double-strand breaks (DSBs) through homologous recombination (HR), as evidenced by increased sensitivity to Olaparib. Their findings showed that while the loss of PTEN is associated with a poorer prognosis in PCa, it may actually indicate a better response to radiotherapy. Additionally, they presented evidence suggesting that PTEN can serve as a biomarker for predicting the response to PARPi as radiosensitizing agents. These findings collectively suggest that PTEN is involved in maintaining genomic stability by delaying the progression of damaged cells into the G2/M phase, thereby providing time for HR-mediated repair of DSBs. Moreover, they identified the PTEN status in PCa as a potential predictor of both radiotherapy and PARPi response, alone or in combination . In 2018, Van de Ven et al. showed that cells resistant to irradiation and tumors derived from a PTEN/Trp53-deficient mouse model of advanced PCa exhibited increased sensitivity to radiation after being treated with NanoOlaparib, a lipid-based injectable nanoformulation of Olaparib. 
This radiosensitivity was accompanied by changes in the expression of γ-H2AX, which were dependent on the radiation dose and specific to NanoOlaparib. In animals, the combination of NanoOlaparib and radiation tripled the median mouse overall survival (OS) when compared with RT alone, and up to 50% of mice achieved a complete response after 13 weeks . In the same year, Oing et al. reported that BCL2 inhibited the NHEJ repair of DSBs by sequestering the KU80 protein outside the nucleus. They also found that this effect is linked to a shift in DNA repair mechanisms towards error-prone PARP1-dependent end-joining (PARP1-EJ). To support this, they provided in vitro evidence that targeting this repair switch using a PARPi (Olaparib) could selectively enhance the radiosensitivity of cells overexpressing BCL2, offering a promising therapeutic approach. They also corroborated these findings by evaluating retrospectively the impact of BCL2 expression on the clinical outcomes of patients who had been given RT after radical prostatectomy (RP) . With this background, Köcher et al. introduced a functional assay in freshly collected tumor samples from PCa patients that enables the identification of the repair switch to the alternative PARP1-EJ pathway. They demonstrated that an ex vivo assay could be used to detect radiosensitivity in tumor biopsies, helping to personalize treatments . Most recently, Fan et al. demonstrated in LNCaP cells that loss of RB1 enhanced RT DNA damage, inhibiting cell proliferation and provoking cellular senescence through a TP53-dependent pathway. However, when TP53 and RB1 are both deleted, cell proliferation is increased, which facilitates the appearance of castration resistance and RT resistance. Nevertheless, when combined with a PARP1 inhibitor, radiosensitivity was restored . 
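Radiosensitization factors such as the 1.52 reported by Han et al. are often read as dose-modifying factors in the linear-quadratic (LQ) survival model, a standard radiobiology formalism. The sketch below is a generic illustration of that interpretation, not a fit to any study above; the α and β values are arbitrary placeholders:

```python
import math

def surviving_fraction(dose_gy: float, alpha: float = 0.15,
                       beta: float = 0.05, enhancement: float = 1.0) -> float:
    """Linear-quadratic model S = exp(-(alpha*D + beta*D**2)), with the
    effective dose D taken as the physical dose times a dose-enhancement
    factor. alpha (Gy^-1) and beta (Gy^-2) are illustrative placeholders."""
    d = enhancement * dose_gy
    return math.exp(-(alpha * d + beta * d ** 2))

# Clonogenic survival after a 2 Gy fraction, with and without a 1.52 factor:
print(round(surviving_fraction(2.0), 3))                    # 0.607
print(round(surviving_fraction(2.0, enhancement=1.52), 3))  # 0.399
```

Under these placeholder parameters, a 1.52 enhancement factor roughly mirrors the extra cell kill a radiosensitizer provides at the same physical dose; real enhancement ratios are measured from clonogenic survival curves, not assumed.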
To sum up, all these preclinical studies have shown that using PARPi to block the DDR pathway in combination with RT enhances tumor cell death, as cells are unable to repair the DNA damage caused by RT.

3.2.4. The Combination of PARPi and Radiotherapy in Prostate Cancer: Clinical Studies

Different randomized clinical trials have proven oncological benefits from the combination of RT and ADT in high-risk and locally advanced PCa. However, even with this treatment, approximately 50% of patients will experience biochemical recurrence, indicating that better therapeutic regimens are needed. On the other hand, different studies have shown benefits when combining PARPi and ADT, as PARP-1 inhibition suppresses the growth of AR-positive PCa cells. Thus, targeting PARP-1 in PCa seems promising, given that both DNA repair and AR-mediated transcription depend on PARP-1 function. Finally, as shown above, PARPi have exhibited the capacity to radiosensitize tumors in PCa preclinical studies. Together, this opens the research question of combining PARPi, RT, and ADT as a triple therapy. To clinically establish the potential synergy between PARPi, RT, and ADT, an ongoing randomized phase II CT known as NADIR (NCT04037254) is currently investigating this approach. In this trial, 170–180 men with localized high-risk PCa will be enrolled. All patients will receive DE-IMRT and 24 months of ADT and will be randomized to receive niraparib for 12 months or not. The primary endpoint is the proportion of patients with a PSA under 0.1 ng/mL after the end of treatment. The results of this trial, still pending, could open up new horizons for the treatment of high-risk PCa.
The two main repair mechanisms for double-stranded breaks (DSBs) are homologous recombination (HR) and non-homologous end joining repair (NHEJ) . The signaling pathway of the HR system is executed by the sequential recruitment of repair proteins into the chromatin surrounding the lesion. The first sensor of the DSBs is the MRN complex (MRE11-RAD50-NBS1), which is attached to both sides of the breaks to signal them. Subsequently, recruitment and accumulation of regulated proteins occur by a complex mechanism that employs phosphorylation and ubiquitination mediated by various kinases, including BRCA1 and CtIP . Thus, maintenance of the repair machinery is critical to protecting cells from DNA damage and preventing tumor processes . Poly(ADP-ribose) (PAR) is involved in different cellular processes such as DNA replication, transcription, repair, and cell death. PARPs are enzymes implicated in PAR synthesis. PARP-1 is a crucial sensor protein for DNA damage that activates signaling pathways that promote appropriate cellular responses and exhibits significantly increased catalytic activity leading to the induction of poly ADP-ribosylation (PARylation) . PARylation is a process that involves breaking down nicotinamide adenine dinucleotide (NAD+) and transferring the resulting ADP-ribose to either PARP-1 itself (autoPARylation) or other specific proteins (PARylation). These activities trigger PARP-1 and other DNA repair enzymes to start DNA repair processes by modifying the structure of chromatin and directing DNA repair factors to the site of damage . Both PARP-1 and PARP-2 facilitate the recruitment and activation of BER factors and consequently facilitate DNA single-strand break (SSB) repair. Moreover, PARP-1 participates in repairing DNA DSBs (through NHEJ and HR) and correcting DNA replication errors . On the other hand, BRCA2 mutations are recognized as significant risk factors for developing PCa. 
Homologous recombination repair (HRR) is a repair mechanism that depends on the BRCA1/2 genes. Consequently, tumor cells with deficient BRCA1/2 genes are unable to repair DNA damage through HRR and rely on PARP proteins for the restoration of single-strand breaks (SSBs). When PARP proteins are inhibited by PARPi, DNA repair cannot occur, leading to subsequent tumor cell death . Given this background, the utilization of PARPi in PCa is supported by two key factors: the elevated occurrence of genetic mutations in PCa and the synthetic lethality concept. Genetic mutations in PCa involve both germline and somatic alterations. Germline mutations impact all cells within the body and can provide valuable insights for genetic counseling. Somatic alterations, on the other hand, are exclusive to tumor cells and arise as a consequence of inherent genome instability within the tumor itself, as well as clonal selection triggered by prior treatments . In PCa, PARPi act through two mechanisms: (1) competitively binding to the catalytic site, thereby preventing the repair of SSBs and favoring their conversion into DSBs, and (2) trapping PARP-1 onto the damaged DNA, inhibiting autoPARylation . Additionally, trapped PARP-1 stalls the progression of replication forks, which further impedes the repair of DSBs and ultimately leads to cell death . Together, these mechanisms contribute to the accumulation of DSBs that HRR-deficient cells cannot repair efficiently . Olaparib, rucaparib, niraparib, and talazoparib are PARPi with different mechanisms of action and distinct trapping capacities that have been proven in PCa . Talazoparib has the greatest trapping capacity, and rucaparib the least.
PROfound is a phase III clinical trial (CT) investigating olaparib in patients with metastatic castration-resistant prostate cancer (mCRPC) and HRR gene mutations progressing after enzalutamide or abiraterone acetate plus prednisone (AA) (second-line setting). This CT demonstrated that patients treated with olaparib had significantly longer radiographic progression-free survival (rPFS) (5.8 months vs. 3.5 months, p < 0.001) and a higher objective response rate (ORR) (22% vs. 4%; odds ratio 5.93, 95% CI: 2.01–25.40) compared to the control group receiving enzalutamide or AA. However, olaparib was associated with more grade ≥ 3 adverse events (AEs) . Based on these results, the U.S. Food and Drug Administration (FDA) and, more recently, the European Medicines Agency (EMA) have approved the use of this drug in this clinical setting. Rucaparib also received FDA authorization after a single-arm phase II CT (TRITON-2) showed an ORR of 43.5% and a PSA response rate of 54.8% in BRCA1/2-mutated mCRPC patients who had progressed after new antiandrogen therapies and chemotherapy (from the third-line setting) . However, this approval is conditional on the results of the phase III study TRITON-3 (NCT02975934), and this drug has not yet been approved by the EMA. Two more PARPi are currently under evaluation by the FDA: niraparib and talazoparib. A phase II single-arm CT (GALAHAD) showed that niraparib in mCRPC patients who had progressed after taxane chemotherapy and AR-targeted therapy could reach a 41% ORR, a 63% composite response rate (CRR), a median rPFS of 8.2 months, and an OS of 12.6 months in the BRCA1/2-mutant population . Another phase II CT (TALAPRO-1) evaluated talazoparib in DDR-HRR-mutated mCRPC patients who had progressed after chemotherapy and demonstrated an overall ORR of 29.8% and a 46% ORR in BRCA1/2-mutated patients .
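As a rough, back-of-envelope reading of the PROfound medians, the medians can be related to a hazard ratio under the idealized assumption that PFS is exponentially distributed in both arms. This is an illustration only: trial HRs are estimated from the full survival curves and need not equal this crude ratio.

```latex
% Constant-hazard (exponential) idealization, for illustration only:
% median m = ln(2)/lambda, so the HR equals the inverse ratio of medians.
S(t) = e^{-\lambda t}, \qquad m = \frac{\ln 2}{\lambda}
\quad\Longrightarrow\quad
\mathrm{HR} \;=\; \frac{\lambda_{\text{olaparib}}}{\lambda_{\text{control}}}
\;=\; \frac{m_{\text{control}}}{m_{\text{olaparib}}}
\;=\; \frac{3.5}{5.8} \;\approx\; 0.60
```

When survival curves separate late, as is common in oncology trials, the estimated HR can differ substantially from this ratio of medians.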
Recently, phase III clinical trials have shown the benefit of combining a PARPi with a new antiandrogen in the first-line setting of mCRPC. The PROpel trial demonstrated an improvement in rPFS (HR 0.66 [95% CI 0.54–0.81]) with the combination of olaparib and abiraterone compared to abiraterone alone, irrespective of HRR mutation status. In addition, preliminary results from MAGNITUDE showed the benefit of combining niraparib and abiraterone, with an improvement in rPFS (HR 0.53, 95% CI: 0.36–0.79, p = 0.0014) and a reduction in the risk of disease progression or death (47% vs. 27%) in mCRPC with alterations in HRR-associated genes . More clinical trials are currently ongoing with new combination regimens and in other PCa settings (available at ClinicalTrials.gov).
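Reported interval estimates like MAGNITUDE's (HR 0.53, 95% CI 0.36–0.79) can be checked for internal consistency: on the log scale, the 95% CI half-width is 1.96 standard errors, from which a Wald z-statistic and two-sided p-value follow. The sketch below is standard normal-theory arithmetic, not code from any trial.

```python
import math

def hr_ci_to_se_and_p(hr, lo, hi):
    """Recover the log-scale standard error and a two-sided p-value
    from a hazard ratio and its 95% confidence interval."""
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # CI width = 2 * 1.96 * SE
    z = math.log(hr) / se                            # Wald z-statistic
    p = math.erfc(abs(z) / math.sqrt(2))             # two-sided normal p-value
    return se, p

# Numbers as quoted for MAGNITUDE in the text: HR 0.53 (95% CI 0.36-0.79).
se, p = hr_ci_to_se_and_p(0.53, 0.36, 0.79)
print(f"SE(log HR) = {se:.3f}, p = {p:.4f}")
```

With these inputs the reconstructed p-value is about 0.0015, consistent with the reported p = 0.0014 once rounding of the published CI endpoints is taken into account.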
Radiosensitivity depends on several factors, including the inherent radiosensitivity of the tumor, its repair capacity, the reoxygenation process, cell cycle redistribution, tumor tissue repopulation, tumor immunity, and vascular endothelial damage. The combination of ionizing radiation with radio-enhancing agents presents an opportunity to enhance the effectiveness of radiotherapy while minimizing potential damage to healthy tissues and reducing toxic side effects. PARP inhibitors (PARPi) possess several qualities that make them suitable for exerting radiosensitizing effects .

3.2.1. The Rationale to Combine PARPi and Radiotherapy

The primary mechanism through which radiation therapy (RT) induces cell death is the generation of various types of DNA damage, with double-strand breaks (DSBs) being the most harmful. Therefore, the sensitivity of tumors to ionizing radiation (IR) is closely related to their capacity to repair DSBs . These breaks are repaired through NHEJ and HR . NHEJ operates throughout the cell cycle, although it functions poorly under high levels of DNA damage. HR, in contrast, can occur only in the S and G2 phases, when a sister chromatid is available as a repair template . Following the occurrence of DSBs, rapid phosphorylation of H2AX at S139 takes place, resulting in the formation of γH2AX. This modification marks the chromatin surrounding the DSB site and plays a crucial role in facilitating the recruitment of various DNA damage response factors, including 53BP1 . In PCa, the detection of γH2AX foci has been used as a predictive indicator of radiosensitivity . Targeting one of these DSB repair pathways can greatly enhance radiosensitivity. However, the challenge lies in minimizing the cytotoxic effect on normal cells while specifically targeting the tumor.
BRCA2-deficient cells have defective HR, so they are forced to rely on other DNA repair pathways, such as BER, which supports high-fidelity DDR. In this regard, inhibition of PARP1, which is central to BER, may radiosensitize HR-deficient cells. In addition, PARP1 is involved in BER to repair SSBs, and although IR induces numerous SSBs, their efficient repair usually does not lead to substantial cell death. However, PARP1 inhibition leaves SSBs unrepaired, and when replication forks encounter them, one-ended DSBs can be generated. These specific DSBs are predominantly repaired through HR. As a result, PARP inhibition not only exhibits toxicity as a monotherapy but also enhances the radiosensitivity of HR-deficient cells . Therefore, the combination of PARPi and radiation emerges as a possible treatment based on the ability of PARPi to amplify unrepaired DNA damage . This ability of PARPi to enhance radiosensitivity has been extensively studied in tumors with BRCA1/2 deficiencies, particularly in hereditary breast and ovarian cancers . However, unlike other tumor types, PCa genomes rarely contain BRCA1/2 mutations. Additionally, there is not enough information to date to support cellular or molecular profiling for personalizing PARPi treatment in PCa. Hence, it is crucial to explore other, more prevalent genetic abnormalities that render PCa susceptible to the radiosensitizing effects of PARP inhibitors. One such mechanism involves the error-prone alternative end-joining (alt-EJ) pathway, in which PARP-1 is crucial. Tumor cells that have switched to this error-prone, PARP-1-dependent alt-EJ pathway become dependent on PARP-1 for DSB repair. As a consequence, PARPi might also radiosensitize tumor cells without HR deficiencies by suppressing PARP-1-dependent alt-EJ .
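The magnitude of radiosensitization is often summarized with the linear-quadratic (LQ) clonogenic survival model through a sensitizer enhancement ratio (SER): the ratio of radiation doses required to reach the same surviving fraction without and with the drug. The sketch below uses invented α and β values (not taken from any cited study) and models the PARPi effect as an increase in the linear term.

```python
import math

def lq_survival(dose, alpha, beta):
    """Clonogenic surviving fraction under the linear-quadratic model."""
    return math.exp(-alpha * dose - beta * dose**2)

def iso_survival_dose(surv, alpha, beta):
    """Dose needed to reach a target surviving fraction.
    Solves beta*D^2 + alpha*D + ln(surv) = 0, taking the positive root."""
    return (-alpha + math.sqrt(alpha**2 - 4 * beta * math.log(surv))) / (2 * beta)

# Hypothetical parameters (illustrative only): the PARPi is modeled as
# increasing the linear (alpha) component of cell killing.
alpha_ctrl, alpha_parpi, beta = 0.20, 0.35, 0.03   # Gy^-1, Gy^-1, Gy^-2
target = 0.10                                      # 10% survival level

d_ctrl = iso_survival_dose(target, alpha_ctrl, beta)
d_parpi = iso_survival_dose(target, alpha_parpi, beta)
ser = d_ctrl / d_parpi  # sensitizer enhancement ratio at 10% survival
print(f"SER at {target:.0%} survival: {ser:.2f}")
```

An SER above 1 indicates radiosensitization; published enhancement factors for PARPi in PCa cell lines are iso-effect ratios of this kind.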
This evidence supports the idea that PARPi are potent radiosensitizers and that their combination with radiotherapy may improve oncological outcomes through a probable synergistic effect. This combination has proven effective in other tumors such as breast cancer , colorectal cancer , pancreatic cancer , lung cancer , and head and neck cancer . In lung cancer, for example, a phase I CT of olaparib + RT with or without cisplatin showed that the maximum tolerated dose (MTD) of olaparib with RT was 25 mg/24 h, markedly lower than had been anticipated, which emphasizes the potent radiosensitizing properties of olaparib . Regarding breast cancer, a phase I CT suggested that PARP inhibition with olaparib given concurrently with radiotherapy for early-stage, high-risk triple-negative breast cancer is well tolerated, with no late treatment-related grade 3 or greater toxic adverse events. Three-year overall survival (OS) and event-free survival (EFS) were 83% (95% CI, 70–100%) and 65% (95% CI, 48–88%), respectively. Homologous recombination status was not associated with OS or EFS .

3.2.2. Mechanism of Radiosensitization of PARPi

Several mechanisms have been described by which PARPi can enhance radiosensitivity in tumors :
- Inhibition of DNA repair: when PARPi are combined with RT, SSB repair is compromised, leading to DNA replication fork collapse and the appearance of DSBs that cause cell death. In addition, PARPi induce "mechanical" replication fork collapse and, consequently, DSBs. This effect is more potent in BRCA-mutated or BRCAness cells with deficient HR and is an example of a synthetic lethality mechanism .
- G2/M arrest: when DNA damage occurs, normal cells activate checkpoints that lead to cell cycle arrest . PARPi can arrest cells in the phases of the cycle in which they are most sensitive to radiotherapy, G2 and M . This mechanism enhances RT by keeping cells longer in the most radiosensitive phases.
- Modulation of chromatin remodeling: PARP-1 inhibition could delay DNA double-strand opening and therefore DNA repair , favoring the DNA damage caused by RT.
- Replication-dependent radiosensitization: PARPi exert their radiosensitizing effect mostly during S phase . Tumors have a higher proliferation rate than the surrounding tissues and therefore more cells in S phase, which helps radiosensitize the tumor while sparing the surrounding structures.
- Impact on the microenvironment and role of hypoxia: hypoxia induces radioresistance. PARPi show structural similarities to nicotinamide, a vasodilator, which could help bypass this hypoxia-driven radioresistance .

3.2.3. The Combination of PARPi and Radiotherapy in Prostate Cancer: Preclinical Studies

The combination of RT and PARPi is a promising strategy to enhance DNA damage in tumors. Following this idea, several preclinical studies in prostate cancer have shown that novel agents targeting the DNA repair pathway may help increase the efficacy of irradiation while minimizing potential side effects . Han et al. first showed in 2013 that radiation resistance triggered by ERG overexpression increased the efficiency of DNA repair, reflected in amplified expression of γ-H2AX, and that this could be reversed via PARP1 inhibition. They demonstrated that olaparib radiosensitized ERG-positive cells by a factor of 1.52 (±0.03) in comparison with ERG-negative cells . In 2015, Gani et al. demonstrated in vitro that AZD-2281 (olaparib) sensitized 22Rv1 cells to radiation, both under normal oxygen conditions (oxia) and in the presence of acute and chronic hypoxia. In an accompanying in vivo study, they showed that combining AZD-2281 with fractionated RT led to a significant delay in tumor growth and increased clonogenic cell death without increasing gut toxicity . Mansour et al.
(2017) proved that PTEN plays a role in the repair of DNA double-strand breaks (DSBs) through homologous recombination (HR), as evidenced by increased sensitivity to Olaparib. Their findings showed that while the loss of PTEN is associated with a poorer prognosis in PCa, it may actually indicate a better response to radiotherapy. Additionally, they presented evidence suggesting that PTEN can serve as a biomarker for predicting the response to PARPi as radiosensitizing agents. These findings collectively suggest that PTEN is involved in maintaining genomic stability by delaying the progression of damaged cells into the G2/M phase, thereby providing time for HR-mediated repair of DSBs. Moreover, they identified the PTEN status in PCa as a potential predictor of both radiotherapy and PARPi response, alone or in combination . In 2018, Van de Ven et al. showed that cells resistant to irradiation and tumors derived from a PTEN/Trp53-deficient mouse model of advanced PCa exhibited increased sensitivity to radiation after being treated with NanoOlaparib, a lipid-based injectable nanoformulation of Olaparib. This radiosensitivity was accompanied by changes in the expression of γ-H2AX, which were dependent on the radiation dose and specific to NanoOlaparib. In animals, the combination of NanoOlaparib and radiation tripled the median mouse overall survival (OS) when compared with RT alone, and up to 50% of mice achieved a complete response after 13 weeks . In the same year, Oing et al. reported that BCL2 inhibited the NHEJ repair of DSBs by sequestering the KU80 protein outside the nucleus. They also found that this effect is linked to a shift in DNA repair mechanisms towards error-prone PARP1-dependent end-joining (PARP1-EJ). To support this, they provided in vitro evidence that targeting this repair switch using a PARPi (Olaparib) could selectively enhance the radiosensitivity of cells overexpressing BCL2, offering a promising therapeutic approach. 
They also corroborated these findings by retrospectively evaluating the impact of BCL2 expression on the clinical outcomes of patients who had received RT after radical prostatectomy (RP) . With this background, Köcher et al. introduced a functional assay in freshly collected tumor samples from PCa patients that enables identification of the repair switch to the alternative PARP1-EJ pathway. They demonstrated that an ex vivo assay could be used to detect radiosensitivity in tumor biopsies, helping to personalize treatments . Most recently, Fan et al. demonstrated in LNCaP cells that loss of RB1 enhanced RT-induced DNA damage, inhibiting cell proliferation and provoking cellular senescence through a TP53-dependent pathway. However, when TP53 and RB1 were both deleted, cell proliferation increased, facilitating the emergence of castration resistance and RT resistance. Nevertheless, when RT was combined with a PARP1 inhibitor, radiosensitivity was restored . To sum up, these preclinical studies have shown that combining PARPi, which block the DDR pathway, with RT enhances tumor cell death, as tumor cells are then unable to repair the DNA damage caused by RT.

3.2.4. The Combination of PARPi and Radiotherapy in Prostate Cancer: Clinical Studies

Different randomized clinical trials have proven oncological benefits from the combination of RT and ADT in high-risk and locally advanced PCa . However, even with this treatment, approximately 50% of these patients will experience biochemical recurrence , indicating that better therapeutic regimens are needed. In parallel, different studies have shown benefits when combining PARPi and ADT, as PARP-1 inhibition suppresses the growth of AR-positive PCa cells. Thus, targeting PARP-1 in PCa seems promising, given that both DNA repair and AR-mediated transcription depend on PARP-1 function . Finally, as shown above, PARPi have exhibited their capacity to radiosensitize tumors in PCa preclinical studies .
Together, this opens the research question of combining PARPi, RT, and ADT as a triple therapy. To clinically establish the potential synergy between PARPi, RT, and ADT, an ongoing phase II randomized CT known as NADIR (NCT04037254) is currently investigating this approach . This trial will enroll 170–180 men with localized high-risk PCa. All patients will receive DE-IMRT and 24 months of ADT and will be randomized to receive niraparib for 12 months or no niraparib. The primary endpoint is the proportion of patients with a PSA below 0.1 ng/mL after the end of treatment. The results of this trial, which are still pending, could potentially open up new horizons for the treatment of high-risk PCa.
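Because NADIR's primary endpoint is a response proportion, the scale of such a trial can be sanity-checked with the standard normal-approximation sample-size formula for comparing two proportions. The response rates below are invented for illustration and are not NADIR's actual design assumptions.

```python
import math

def n_per_arm(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Sample size per arm for comparing two proportions (normal approximation).
    Defaults: two-sided alpha = 0.05 (z = 1.96) and 80% power (z = 0.8416)."""
    p_bar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Invented effect size: 40% vs. 60% of patients reaching PSA < 0.1 ng/mL.
print(n_per_arm(0.40, 0.60))  # -> 97 per arm
```

With these assumptions, about 97 evaluable patients per arm would be required, in the same range as NADIR's planned enrollment of 170–180 men.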
Radiotherapy is a key treatment for PCa that has traditionally been used for the localized and locally advanced stages . However, more evidence is emerging regarding the treatment of the primary tumor in newly diagnosed metastatic PCa. In fact, a secondary analysis of the STAMPEDE trial showed an OS benefit when treating the primary tumor with RT in patients with fewer than three bone metastases or with M1a disease . Moreover, two phase II randomized CTs investigated the role of RT as a metastasis-directed therapy (MDT). The first, STOMP (n = 62), showed longer ADT-free survival with MDT than with surveillance . The second, ORIOLE, demonstrated a lower progression rate within 6 months with MDT vs. surveillance . Recently, a combined analysis of the STOMP and ORIOLE trials showed a significant improvement in PFS in favor of MDT (HR: 0.44, p < 0.001) . These data show that radiotherapy is becoming more important in different PCa settings over time. However, there are two main difficulties related to RT as a PCa treatment: the first is radioresistance, and the second is treatment-related toxicity. In this scenario, and with the aim of overcoming these issues, different combined treatment strategies are emerging. One of the most promising is the combination of PARPi and RT, which is supported by solid scientific evidence, as shown in this article. Nonetheless, to date, only results from a few preclinical studies are available to evaluate the impact of combining PARPi and RT in PCa. Regarding these studies, it is important to note that the majority were performed using cell lines derived from metastatic tissue of advanced PCa or carrying selected mutations . This means that these positive results still need to be proven clinically in different PCa settings. Another limitation of these studies is their small sample sizes and the exclusive use of olaparib among all PARPi.
All PARPi act similarly, but each has particularities, such as a different potency for PARP trapping , meaning that these positive results with olaparib may not be extrapolable to other PARPi. Conversely, it is possible that other PARPi such as talazoparib, rucaparib, or niraparib achieve better outcomes. Again, this hypothesis should be further studied. NADIR is the only current clinical trial evaluating the combination of PARPi and RT. In addition, it is the only study assessing the impact of triple therapy with ADT, RT, and PARPi. This approach is supported by the theory that ADT can enhance the radiosensitivity of PCa cells by reducing both the hypoxic fraction and the testosterone-induced increase in DNA repair mechanisms . In addition, the DNA-PARP repair pathways are closely connected with the androgen receptor signaling pathway, which is the main regulatory pathway for tumor growth in PCa and a therapeutic target for ADT . This clinical trial opens the door to a new horizon in PCa treatment. Indeed, there is sufficient preclinical evidence encouraging the use of PARPi and RT ; abundant clinical evidence supporting the positive effect of combining PARPi with ADT as well as with new antiandrogens ; and solid evidence for the use of RT and ADT . It may therefore be time to explore the combination of the three therapies. Currently, NADIR is evaluating this combination in high-risk localized and locally advanced PCa , but perhaps this rationale could also be applied to high-risk biochemical failure after treatment with curative intent, to low-volume metastatic hormone-sensitive PCa (mHSPC), or even to oligoprogression in mCRPC. In fact, emerging data in mCRPC demonstrate that at this stage, HR defects render these tumors sensitive to PARP inhibition. There appears to be a dependency on the androgen receptor (AR) to maintain HR gene expression and activity.
In addition, after ADT, PARP-mediated repair pathways are upregulated as a mechanism of tumor cell survival, which makes these cells more sensitive to PARPi. Asim et al. demonstrated in vivo a synthetic lethality between ADT and PARPi, suggesting that ADT may functionally impair HR before the appearance of castration resistance. This finding could potentially be exploited clinically in advanced or high-risk PCa . To sum up, the combination of PARPi and RT could potentially radiosensitize PCa cells, achieving better oncological outcomes while minimizing undesirable toxicities. However, this combination should be further studied in phase II and phase III clinical trials. In addition, incipient evidence supports the rationale for exploring triple combinations of PARPi, RT, and ADT. Nevertheless, this new combination therapy for PCa will have to face the risk of an increased rate of severe adverse events, which may be one of its most important limitations, making a well-designed phase I CT essential for determining the MTD.
RT induces cell death by causing various types of DNA damage, while PARPi inhibit the DNA repair pathway. This rationale makes PARPi potent radiosensitizers, as has been demonstrated in different tumors. Currently, several preclinical studies have demonstrated positive results with RT and olaparib in PCa, and an ongoing phase II clinical trial is evaluating the combination of RT, ADT, and niraparib in high-risk and locally advanced PCa. Nevertheless, more randomized clinical trials are necessary to prove the value of this combination with different PARPi and in different PCa settings.
Comparing human and chimpanzee temporal lobe neuroanatomy reveals modifications to human language hubs beyond the frontotemporal arcuate fasciculus

Tractography from ATL and pMTG. By using probabilistic tractography from high-resolution diffusion-weighted images of 50 humans and 29 chimpanzees, we generated tractograms originating from two seeds in each hemisphere, the ATL and the pMTG. The tractograms from the left ATL seed revealed an extensive ventral system of white-matter pathways (including a well-defined inferior fronto-occipital fascicle) in both humans and chimpanzees. The tractograms did not substantially differ between the two species, reaching the ventral prefrontal cortex via the extreme capsule and extending posteriorly along the superior and middle temporal gyri to the posterior temporal lobe. In humans, probabilistic tracking from the left pMTG seed showed that the ventral white-matter system extends into the right hemisphere via the corpus callosum and into the left dorsal pathways via the connection between the posterior superior temporal sulcus (STS) and the inferior parietal lobe. In chimpanzees, these tractograms were similar with regard to the interhemispheric connections, but connectivity to the dorsal stream was much weaker than in humans. In the right hemisphere, the connectivity patterns mimicked what was found for the left hemisphere in both humans and chimpanzees (SI Appendix, Fig. S1). The tractograms' visualization illustrates overlap between all individuals in order to take into account the intraspecies variability of tractograms' anatomical distribution. Reconstructing Canonical Tracts. In order to better understand the interspecies differences in ATL and pMTG tractograms, we proceeded to compare their anatomy in relation to seven canonical language tracts.
The three portions of the AF—the frontotemporal (in other nomenclatures also known as "direct," "long," or "classical") (reviewed in ref.), frontoparietal ("anterior"/"indirect"/"perisylvian"), and parietotemporal ("posterior"/"indirect"/"perisylvian")—were defined anatomically according to Catani et al. This AF "tripartite subdivision" will represent the dorsal stream in this work, while the inferior fronto-occipital fasciculus (IFOF), inferior longitudinal fasciculus (ILF), middle longitudinal fasciculus (MdLF), and uncinate fasciculus (UF) form the ventral stream. In chimpanzees, the ventral canonical tracts were extracted from the white-matter atlas of ref. and calculated in the present human sample according to the recipes proposed by the same authors. The AF subdivisions in humans were reconstructed following previously implemented recipes. Subsequently, we adapted the same AF regions of interest (ROIs) to the chimpanzee. This protocol yielded satisfactory results in both humans and chimpanzees. In both species, all three AF portions were present bilaterally, with connections between the frontal and temporal areas and also branching toward the parietal cortex ( and SI Appendix, Fig. S2 have more details). Quantification of Interspecies Ventral and Dorsal Pathway Similarities. Having examined the connectivity of both pMTG and ATL in both species and characterized the course of all major long-range connections reaching these areas, we next examined the specific contribution of the canonical—dorsal and ventral—tracts to the pMTG and ATL connectivity patterns. For that, we used linear regression analyses on a "tract load" measure, defined as the volume of the overlap between the tractograms created from the pMTG or ATL seeds and each separate canonical tract, expressed as a proportion of that canonical tract's volume (note that all the statistical analyses hereafter use this dependent variable).
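The tract load measure lends itself to a compact numerical illustration. Below is a minimal sketch, assuming binary voxel masks as inputs; the function name and toy data are invented for illustration and are not the authors' actual pipeline:

```python
import numpy as np

def tract_load(seed_tractogram: np.ndarray, canonical_tract: np.ndarray) -> float:
    """Volume of overlap between a seed-based tractogram and a canonical
    tract, expressed as a proportion of the canonical tract's volume.
    Both inputs are binary 3D voxel masks (nonzero = visited by streamlines)."""
    tract_voxels = np.count_nonzero(canonical_tract)
    if tract_voxels == 0:
        return 0.0
    overlap_voxels = np.count_nonzero(np.logical_and(seed_tractogram, canonical_tract))
    return overlap_voxels / tract_voxels

# Toy 4x4x4 masks: the canonical tract occupies 8 voxels and the seed
# tractogram covers 4 of them, giving a tract load of 0.5.
tract = np.zeros((4, 4, 4)); tract[0, :2, :] = 1
tractogram = np.zeros((4, 4, 4)); tractogram[0, 0, :] = 1
print(tract_load(tractogram, tract))  # 0.5
```

Because the measure is a ratio of voxel counts, voxel size cancels out, which is why it can be compared across tracts of different volumes.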
At the whole-brain level, quantification of (dis-)similarities indicated a main effect of species for tractography from both pMTG and ATL seeds (P values < 0.001) and an interaction between hemisphere, stream, and species (P values < 0.001) (SI Appendix, Tables S1 and S2). Within the left hemisphere, humans and chimpanzees differed significantly in how the seeds connected to the dorsal vs. ventral stream [species by stream interaction; pMTG seed: F(1,77) = 190.4, P < 0.001; ATL seed: F(1,77) = 287.5, P < 0.001]. Results concerning the right hemisphere showed similar effects (SI Appendix, Tables S1–S3). The differences we found visually and statistically (above) were further corroborated by the tract load analysis for each individual tract. For that, we quantified how separate tracts contribute to explaining interspecies differences in separate linear models, using the models' R² as a measure of effect size. For both hemispheres, the interspecies differences we observed for the pMTG seed were best explained by tracts forming the dorsal stream (AF) and in particular, the parietotemporal branch (left R² = 0.71, P < 0.001; right R² = 0.7, P < 0.001). Species also explained the variance for the pMTG connections toward the ventral tracts. However, within this stream, humans showed more overlap with pMTG tractograms only for the left MdLF (left R² = 0.45, P < 0.001). Conversely, ILF, IFOF, and UF overlapped more strongly with the pMTG in the chimpanzees in both left (R² = 0.5, P < 0.001; R² = 0.36, P < 0.001; and R² = 0.24, P < 0.001, respectively) and right (R² = 0.43, P < 0.001; R² = 0.53, P < 0.001; and R² = 0.37, P < 0.001, respectively) hemispheres. For the left ATL, there was no interspecies difference in connectivity with any of the AF portions, whereas the ventral tracts differed between species, with the exception of the IFOF.
For the left ATL seed, species explained the variance in tract load for the following canonical ventral tracts: ILF (R² = 0.9, P < 0.001), UF (R² = 0.49, P < 0.001), and MdLF (R² = 0.42, P < 0.001). Results in the right hemisphere were similar. The contribution of specific tracts to the interspecies difference is specified in .
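The effect-size logic above can be illustrated with a hedged sketch: regressing tract load on a single binary species predictor gives a model whose R² equals the between-group share of the total variance, so it can be computed directly without a regression library. All values below are invented for illustration, not the study's data:

```python
import numpy as np

def species_r_squared(loads_human: np.ndarray, loads_chimp: np.ndarray) -> float:
    """R^2 of the linear model 'tract load ~ species' with one binary
    predictor: between-group sum of squares over total sum of squares."""
    y = np.concatenate([loads_human, loads_chimp])
    grand_mean = y.mean()
    ss_total = np.sum((y - grand_mean) ** 2)
    ss_between = (loads_human.size * (loads_human.mean() - grand_mean) ** 2
                  + loads_chimp.size * (loads_chimp.mean() - grand_mean) ** 2)
    return ss_between / ss_total

# Invented tract-load values for one tract in the two species: groups that
# barely overlap yield a large R^2, i.e. species explains most of the variance.
human = np.array([0.70, 0.60, 0.65])
chimp = np.array([0.20, 0.25, 0.15])
print(round(species_r_squared(human, chimp), 2))  # 0.97
```

On this reading, an R² of 0.71 for the parietotemporal branch means species membership alone accounts for about 71% of the variance in that tract's load.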
We reassessed the connectional basis of language in light of developments in both our understanding of language and the emergence of increasingly high-quality comparative neuroimaging data and methods. Using a large high-quality in vivo chimpanzee dataset, we show that dorsal connectivity of the pMTG to both frontal and parietal cortices is much more extensive in the human brain. By directly comparing the organization of the pMTG- and ATL-based tractograms between the species (and accounting for both intra- and interspecies variability), we were able to identify structural changes that are unique to humans and may have laid the foundation for full-fledged language in the human lineage. Additionally, we found that the AF in chimpanzees obeys the same threefold division as in humans, with one large connection between frontal and temporal lobes and two shorter ones: frontoparietal and parietotemporal. Between-Species Differences in pMTG Connectivity. Our results on the pMTG-related white-matter connections in humans are in line with previous anatomical findings by Turken and Dronkers, with tractograms encompassing extensive portions of temporal and parietal lobes. Importantly, our analysis of chimpanzees confirmed the uniqueness of the human expansion of the dorsal language tracts. In humans, tractograms originating from the pMTG overlapped with all temporo-parietofrontal connections of the AF, whereas the same tractograms in chimpanzees were confined mainly to the temporal lobe. Further, pMTG connectivity differed in the ventral pathway; with the exception of the MdLF, the overlap between the pMTG tractogram and the ventral language stream was stronger in chimpanzees than in humans. A plethora of studies indicates that pMTG has a unique role in human language. It has been repeatedly postulated to act as a lexical hub.
It is also well established that damage to pMTG can induce paragrammatism and can impair object naming and/or impede (syntactic) comprehension [e.g., due to the presence of brain tumors or after stroke]. Functional studies have demonstrated that the pMTG mediates the functional integration of novel words into the mental lexicon (e.g., refs. –), and previous evolutionary neuroscience studies have shown that this area has a human-unique pattern of white-matter connectivity. Importantly, the evolutionary development of pMTG as a white-matter hub accommodating new connections between frontal and temporal regions aligns well with observations from human development. Indeed, early in life—before language is acquired—the structural connection between frontal and temporal cortices is vastly underdeveloped, joining premotor regions solely to the most superior portions of the temporal cortex. In these newborns, there is also no functional connectivity between frontal and temporal regions. For older children, the AF remains immature at the age of seven, whereas AF volume and fractional anisotropy both increase with age in adolescence. A robust connection between inferior frontal and deep temporal areas (including middle and inferior gyri) through the AF is found only in adulthood. Interestingly, other evidence supports the crucial role of the AF in language/cognitive abilities, such as phonological processing, language learning, naming and speech rate and efficiency, or even singing and musical training. In our study, the observed broad expansion of pMTG connectivity in humans is mainly explained by two branches of the AF—the frontotemporal branch and especially, the parietotemporal branch. Importantly, these effects are present even when taking intraspecies variability into account.
These findings suggest that, as the AF expanded in human evolution, the modifications were concentrated in the frontoparietal and parietotemporal branches and further, that the bundle connecting pMTG to parietal areas underwent particularly strong selection. The parietotemporal connection of the AF in humans is of special interest because of its putative role in language learning. Evidence suggests that the connection between the pMTG and inferior parietal cortex permits phonological information to be held in working memory as part of the larger phonological loop system. Recent work suggests that this parietotemporal portion may encode the order of phonological information, while the frontoparietal component is involved in transferring this order information to portions of the left inferior frontal gyrus. Further, there is evidence for human-unique differences in structure, as asymmetry of the thickness of the STS has been documented in humans but not in chimpanzees. Although we observed trends that frontotemporal AF tractograms explained more variance in the right hemisphere compared with the left, interspecies differences were statistically significant in both hemispheres. Therefore, with the present results we cannot claim clear species differences in laterality. Human ATL Connectivity Specializations. Like the pMTG, the ATL has been postulated to have a crucial role in language as a semantic hub. Indeed, the "hub-and-spoke" model of ref. proposes that the left ATL is involved in binding together perceptually based semantic representations into coherent concepts. For this reason, we explored whether ATL-related white-matter organization could also differ between humans and chimpanzees. In an opposite pattern to pMTG connectivity and as expected, the left ATL scarcely connected with the AF in either species, but ventral pathway connectivity was significantly different for nearly all relevant tracts.
When comparing humans with chimpanzees, ILF, MdLF, and UF were the best predictors of interspecies differences with regard to the left ATL connectivity. The ILF is a large association tract that has expanded laterally in the human and great ape lineage. The degeneration of the ILF can produce semantic and lexical retrieval difficulties (reviewed in ref.). The UF connects the ATL to orbitofrontal cortex and plays a role in semantic and syntactic functions. Although direct stimulation of the UF does not appear to cause language errors, lesions to the tract are linked to lexical deficits. The pattern of connections was similar in the right hemispheres. Ventral Pathway Modifications. The MdLF has increasingly been implicated in language processing, but its connectivity to human language hubs has never been compared with its connectivity to analogous regions in other species. Here, our direct comparison between human and chimpanzee showed that the MdLF is the only tract showing human-unique expansions in both ATL and pMTG hubs. Tractograms from pMTG appeared to be more strongly integrated with the IFOF in the chimpanzees, while IFOF connectivity to ATL was low in both species. In chimpanzees, three of the four ventral pathway tracts (ILF, IFOF, and UF) showed a greater proportion of connectivity to pMTG than in humans, while the reverse pattern was observed in the ATL. Given these anatomical findings and previous evidence that the ATL/IFOF system plays an important role in conceptual processing [e.g., humans or vervet monkeys], our results lend support to the view that concepts rely on a white-matter structure that is shared between humans and other primates. Reweighting of ventral pathways between the two species may also reflect recruitment of pMTG areas originally used for visual processing for language processing in humans.
The larger contribution of IFOF and ILF to chimpanzee pMTG connectivity may be due to the pMTG's anatomical location adjacent to visual association areas. Overall, these data provide more details on the relationship between ventral pathways and the temporal association cortex in humans and chimpanzees, which has only recently been characterized using comparative neuroimaging. Additional Considerations. Our initial claims stemmed from a theory-driven interest in left-hemispheric temporal lobe connectivity; however, to gain insight into whether these patterns were consistent across hemispheres, we ran the same analyses for the right hemisphere. We found similar results, suggesting that modifications to white-matter organization in humans occurred bilaterally. Further, frontotemporal tracts loaded slightly higher on the right hemisphere human pMTG than on the left; however, our study was not designed to test between-hemisphere differences. In recent decades, evidence for right hemisphere specializations has been accumulating, especially with regard to the role of right parietofrontal circuits in tool action planning and toolmaking, behaviors that are relevant to human evolution. Our findings are consistent with the possibility that language and tool use may rely on similar modifications of dorsal pathways in the human brain occurring in the left and right hemispheres, respectively. In addition to tool-related cognitive processing, there is evidence that pMTG and ATL are important for other behaviors beyond language in humans. The pMTG has been implicated in object motion processing, possibly due to its proximity to visual motion area MT+. The ATL is involved in semantic and affective cognition, including picture recognition, gustatory and olfactory memory, emotional memory, and storage of socially relevant entities (reviewed in ref.).
Thus, the putative evolutionary processes causing the reweighting of dorsal and ventral tract connectivity to pMTG and the increase of ventral tract connections to ATL in humans may have been due to selection for modifications in tool-related cognition, affective cognition, and forms of semantic processing that are not limited to language. Determining the homologous cortical territories in chimpanzees for human pMTG and ATL is challenging because the methods available for delineating these regions in humans (functional magnetic resonance imaging (fMRI) and, in some clinical cases, direct stimulation) are not feasible in apes. We therefore rely on previous structural data from chimpanzees, including cortical parcellations (e.g., ref.), sulcal maps, tractography of extrastriate and temporal areas, and myelin maps. It is worth mentioning that our ROIs for humans and chimpanzees were similar but not identical in proportion to the total intracranial volume, with chimpanzee ROIs making up a smaller proportion. However, this should not bias our results for two reasons. First, the ROIs are confined to association cortex, which has expanded disproportionately in humans compared with great apes and other primates. Second, our core analysis relies on tract loads expressed as proportions relative to the tracts and ROIs rather than on absolute volumes. Comparative anatomical studies endeavor to identify putative evolutionary modifications to the brain; however, they cannot determine the mechanisms that are responsible for the differences in neuroanatomy. As such, we cannot disambiguate whether these differences in connectivity are environmentally or genetically driven or (most likely) a combination of both. In order to shed light on these interlinked factors, future studies are needed that compare individuals between species across the life span. In the case of chimpanzees, neuroanatomical comparisons of individuals from different environments (i.e., captive vs.
wild populations) may also shed light on how natural selection shaped human brains by permitting the characterization of developmental flexibility and of the amount of individual variation in connectivity within this species.
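The second point above — that proportions, unlike absolute volumes, are insensitive to overall brain or ROI scale — can be checked numerically. A small sketch with invented toy masks (not the study's data), upsampling both masks and comparing the proportional load against the absolute overlap:

```python
import numpy as np

def load_and_overlap(tractogram: np.ndarray, tract: np.ndarray):
    """Return (proportional tract load, absolute overlap voxel count)
    for two binary 3D masks."""
    overlap = np.count_nonzero(np.logical_and(tractogram, tract))
    return overlap / np.count_nonzero(tract), overlap

# Toy masks: the tract occupies 8 voxels, the tractogram covers 4 of them.
tract = np.zeros((4, 4, 4)); tract[0, :2, :] = 1
tractogram = np.zeros((4, 4, 4)); tractogram[0, 0, :] = 1

# Upsample both masks by 2x along each axis (8x the voxel count), mimicking
# a larger brain with the same relative anatomy.
big_tract = tract.repeat(2, 0).repeat(2, 1).repeat(2, 2)
big_tractogram = tractogram.repeat(2, 0).repeat(2, 1).repeat(2, 2)

small = load_and_overlap(tractogram, tract)        # (0.5, 4)
big = load_and_overlap(big_tractogram, big_tract)  # (0.5, 32)
print(small, big)
```

The proportional load stays at 0.5 while the absolute overlap grows eightfold, which is why a proportion-based measure is less biased by the interspecies difference in brain size.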
Our results on the pMTG-related white-matter connections in humans are in line with previous anatomical findings by Turken and Dronkers , with tractograms encompassing extensive portions of temporal and parietal lobes. Importantly, our analysis of chimpanzees confirmed the uniqueness of the human expansion of the dorsal language tracts. In humans, tractograms originating from the pMTG overlapped with all temporo-parietofrontal connections of the AF, whereas the same tractograms in chimpanzees were confined mainly to the temporal lobe. Further, pMTG connectivity differed in the ventral pathway; with the exception of the MdLF, the overlap between the pMTG tractogram and the ventral language stream was stronger in chimpanzees than in humans. A plethora of studies indicates that pMTG has a unique role in human language. It has been repeatedly postulated to act as a lexical hub . It is also well established that damage to pMTG can induce paragrammatism and can impair object naming and/or impede (syntactic) comprehension [e.g., due to the presence of brain tumors or after stroke ]. Functional studies have demonstrated that the pMTG mediates the functional integration of novel words into the mental lexicon (e.g., refs. – ), and previous evolutionary neuroscience studies have shown that this area has a human-unique pattern of white-matter connectivity . Importantly, the evolutionary development of pMTG as a white-matter hub accommodating new connections between frontal and temporal regions aligns well with observations from human development. Indeed, early in life—before language is acquired—the structural connection between frontal and temporal cortices is vastly underdeveloped, joining premotor regions solely to the most superior portions of the temporal cortex . In these newborns, there is also no functional connectivity between frontal and temporal regions. 
For older children, the AF remains immature at the age of seven , whereas AF volume and fractional anisotropy both increase with age in adolescence . A robust connection between inferior frontal and deep temporal areas (including middle and inferior gyri) through the AF is found only in adulthood . Interestingly, other evidence supports the crucial role of the AF in language/cognitive abilities, such as phonological processing , language learning , naming and speech rate and efficiency , or even singing and musical training . In our study, the observed broad expansion of pMTG connectivity in humans is mainly explained by two branches of the AF—the frontotemporal branch and especially, the parietotemporal branch. Importantly, these effects are present even when taking intraspecies variability into account. These findings suggest that, as the AF expanded in human evolution, the modifications were concentrated in the frontoparietal and parietotemporal branches and further, that the bundle connecting pMTG to parietal areas underwent particularly strong selection. The parietotemporal connection of the AF in humans is of special interest because of its putative role in language learning. Evidence suggests that the connection between the pMTG and inferior parietal cortex permits phonological information to be held in working memory as part of the larger phonological loop system . Recent work suggests that this parietotemporal portion may control information about the order of phonological information, while the frontoparietal component is involved in transferring this order information to portions of the left inferior frontal gyrus . Further, there is evidence for human-unique differences in structure, as asymmetry of the thickness of the STS has been documented in humans but not in chimpanzees . 
Although we observed trends that frontotemporal AF tractograms explained more variance in the right hemisphere compared with the left, interspecies differences were statistically significant in both hemispheres. Therefore, with the present results we cannot claim clear species differences in laterality.
Like the pMTG, the ATL has been postulated to have a crucial role in language as a semantic hub. Indeed, “the hub-and-spoke” model by ref. proposes that the left ATL is involved in binding together perceptually-based semantic representations into coherent concepts. For this reason, we explored whether ATL-related white-matter organization could also differ between humans and chimpanzees. In an opposite pattern to pMTG connectivity and as to be expected, the left ATL scarcely connected with AF in either species, but ventral pathway connectivity was significantly different for nearly all relevant tracts. When comparing humans with chimpanzees, ILF, MdLF, and UF were the best predictors of interspecies differences with regard to the left ATL connectivity. The ILF is a large association tract that has expanded laterally in the human and great ape lineage . The degeneration of ILF can produce semantic and lexical retrieval difficulties (reviewed in ref. ). The UF connects the ATL to orbitofrontal cortex and plays a role in semantic and syntactic functions . Although direct stimulation of the UF does not appear to cause language errors , lesions to the tract are linked to lexical deficits . The pattern of connections was similar in the right hemispheres .
MdLF has increasingly been implicated in language processing , but its connectivity to human language hubs has never been compared with its connectivity to analogous regions in other species. Here, our direct comparison between human and chimpanzee showed that the MdLF is the only tract showing human-unique expansions in both ATL and pMTG hubs. Tractograms from pMTG appeared to be more strongly integrated with IFOF in the chimpanzees , while IFOF connectivity to ATL was low in both species. In chimpanzees, three of the four ventral pathway tracts (ILF, IFOF, and UF) showed a greater proportion of connectivity to pMTG than in humans, while the reverse pattern was observed in the ATL. Given these anatomical findings and previous evidence that the ATL/IFOF system plays an important role in conceptual processing [e.g., humans or vervet monkeys ], our results add light to the view that concepts rely on a white-matter structure that is shared between humans and other primates. Reweighting of ventral pathways with respect to the two species may also reflect recruitment of areas originally used for visual processing in the pMTG in humans for language processing. The larger contribution of IFOF and ILF to chimpanzee pMTG connectivity may be due to its anatomical location adjacent to visual association areas . Overall, these data provide more details on the relationship between ventral pathways and the temporal association cortex in humans and chimpanzees, which has only recently been characterized using comparative neuroimaging .
Our initial claims stemmed from a theory-driven interest in left-hemispheric temporal lobe connectivity; however, to gain insight into whether these patterns were consistent across hemispheres, we ran the same analyses for the right hemisphere. We found similar results, suggesting that modifications to white-matter organization in humans occurred bilaterally. Further, frontotemporal tracts loaded slightly higher on the right hemisphere human pMTG than the left hemisphere; however, our study was not designed to test between-hemisphere differences. In recent decades, evidence for right hemisphere specializations has been accumulating, especially with regard to the role of right parietofrontal circuits in tool action planning and toolmaking , behaviors that are relevant to human evolution. Our findings are consistent with the possibility that language and tool use may rely on similar modifications of dorsal pathways in the human brain occurring in the left and right hemispheres, respectively. In addition to tool-related cognitive processing, there is evidence that pMTG and ATL are important for other behaviors beyond language in humans. The pMTG has been implicated in object motion processing, possibly due to its proximity to visual motion area MT+ . The ATL is involved in semantic and affective cognition, including picture recognition, gustatory and olfactory memory, emotional memory, and storage of socially-relevant entities (reviewed in ref. ). Thus, the putative evolutionary processes causing the reweighting of dorsal and ventral tract connectivity to pMTG and the increase of ventral tract connections to ATL in humans may have been due to selection for modifications in tool-related cognition, affective cognition, and forms of semantic processing that are not limited to language. 
Determining the homologous cortical territories in chimpanzees for human pMTG and ATL is challenging because the methods available for delineating these regions in humans (functional Magnetic Resonance Imaging (fMRI) and, in some clinical cases, direct stimulation) are not feasible in apes. We, therefore, rely on previous structural data from chimpanzees, including cortical parcellations (e.g., ref. ), sulcal maps, tractography of extrastriate and temporal areas, and myelin maps. It is worth mentioning that our ROIs for humans and chimpanzees were similar but not identical in proportion to the total intracranial volume, with chimpanzee ROIs making up a smaller proportion. However, this should not bias our results for two reasons. First, the ROIs are confined to association cortex, which has expanded disproportionately in humans compared with great apes and other primates. Second, our core analysis relies on proportions of tract loads in relation to ROIs rather than on absolute volumes. Comparative anatomical studies endeavor to identify putative evolutionary modifications to the brain; however, they cannot determine the mechanisms that are responsible for the differences in neuroanatomy. As such, we cannot disambiguate whether these differences in connectivity are environmentally or genetically driven or (most likely) are a combination of both. In order to shed light on these interlinked factors, future studies are needed that compare individuals between species across the life span. In the case of chimpanzees, neuroanatomical comparisons of individuals from different environments (i.e., captive vs. wild populations) may also shed light on how natural selection shaped human brains by permitting the characterization of the flexibility of development and of the amount of individual variation in connectivity within this species.
The results of our study indicate that two hubs critical for language, pMTG and ATL, have undergone changes in their connectivity since our evolutionary divergence from chimpanzees. We found that, compared with chimpanzees, human pMTG has expanded AF connectivity, with the largest increase in the parietotemporal branch, and decreased ventral pathway connectivity, particularly with ILF and IFOF. Human ATL has more robust connections with ventral pathways, with the exception of the IFOF. Finally, MdLF is the only tract showing interspecies differences for both ATL and pMTG hubs. Together, these data suggest that the evolutionary modifications to human language streams extend beyond the AF: they include an increase of dorsal stream connectivity to pMTG and of ventral stream connectivity to ATL, with a concomitant reduction in ventral stream connectivity to the pMTG.
Sample.
High-resolution DWI data for 50 healthy human subjects (mean age = 43.7 ± 21.6 y) were acquired using a Siemens Prisma Fit 3T scanner and a 32-channel head coil at the Donders Center for Cognitive Neuroimaging, Nijmegen. Diffusion-weighted images were acquired with a simultaneous multislice diffusion‐weighted echo planar imaging (EPI) sequence. Acquisition parameters were the following: multiband factor = 3; TR (repetition time) = 2,282 ms; TE (echo time) = 71.2 ms; in-plane acceleration factor = 2; voxel size = 2 × 2 × 2 mm³; nine unweighted scans; 100 diffusion-encoding gradient directions in multiple shells; b values = 1,250 and 2,500 s/mm²; and Taq (total acquisition time) = 8 min, 29 s. A high-resolution T1 anatomical scan was obtained for spatial processing of the DWI data using the MP2RAGE sequence with the following parameters: 176 slices; voxel size = 1 × 1 × 1 mm³; TR = 6 s; TE = 2.34 ms; and Taq = 7 min, 32 s. MP2RAGE data were processed using the Oxford Center for Functional Magnetic Resonance Imaging of the Brain (FMRIB) software library (FSL 5.0.10; https://www.fmrib.ox.ac.uk/fsl ) and skull stripped with the Brain Extraction Tool (BET). DWI images were preprocessed to realign and correct for eddy currents (using Statistical Parametric Mapping software, SPM12) and for artifacts from head and/or cardiac motion using robust tensor modeling [Donders Institute Diffusion Imaging toolbox]. After preprocessing, diffusion parameters were estimated at each voxel using BedpostX. Tensor reconstruction using a weighted least squares fit was performed via Diffusion Tensor Imaging fitting (DTIFit) within FMRIB's Diffusion Toolbox (FDT) to create Diffusion Tensor Imaging (DTI) scalar images, including fractional anisotropy (FA), mean diffusivity (MD), and the three eigenvalues [FSL 5.0.10].
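The DTI scalars produced by DTIFit follow directly from the three tensor eigenvalues. As a minimal illustrative sketch (the study obtained these maps from FSL's DTIFit, not custom code), the standard definitions of MD and FA per voxel are:

```python
import numpy as np

def dti_scalars(eigvals):
    """Compute mean diffusivity (MD) and fractional anisotropy (FA)
    from the three diffusion tensor eigenvalues.

    Standard definitions:
      MD = (l1 + l2 + l3) / 3
      FA = sqrt(3/2) * ||lambda - MD|| / ||lambda||
    """
    lam = np.asarray(eigvals, dtype=float)
    md = float(lam.mean())
    denom = np.sqrt((lam ** 2).sum())
    if denom == 0.0:
        return md, 0.0
    fa = float(np.sqrt(1.5) * np.sqrt(((lam - md) ** 2).sum()) / denom)
    return md, fa

# Perfectly isotropic diffusion gives FA = 0.
md_iso, fa_iso = dti_scalars([1.0, 1.0, 1.0])
# A maximally anisotropic ("cigar-shaped") tensor gives FA close to 1.
md_ani, fa_ani = dti_scalars([1.0, 0.0, 0.0])
```

The eigenvalue triplets here are invented test cases; in practice DTIFit writes whole-brain FA and MD volumes.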
This study was approved by the local ethics committee (Commissie Mensgebonden Onderzoek (CMO, i.e., the Committee for Ethics of Research on Humans) Arnhem-Nijmegen, “Imaging Human Cognition,” CMO 2014/288). Subjects provided informed consent. Diffusion-weighted data from 29 chimpanzees ( Pan troglodytes ; 28 ± 17 y) were obtained from a data archive of scans obtained prior to the 2015 implementation of US Fish and Wildlife Service and NIH regulations governing research with chimpanzees. These scans were made available through the United States–based National Chimpanzee Brain Resource. All scans reported here were completed in 2012 and have been used in previous studies (e.g., refs. and ). Chimpanzees were housed at the Yerkes National Primate Research Center (YNPRC) in Atlanta, GA; procedures were carried out in accordance with protocols approved by the YNPRC and the Emory University Institutional Animal Care and Use Committee (approval no. YER-2001206). Following standard YNPRC veterinary procedures, chimpanzee subjects were immobilized with ketamine injections (2 to 6 mg/kg intramuscular) and then, anesthetized with an intravenous propofol drip (10 mg/kg per hour) prior to scanning. Subjects remained sedated for the duration of the scans and the time necessary for transport between their home cage and the scanner location. After scanning, primates were housed in a single cage for 6 to 12 h to recover from the effects of anesthesia before being returned to their home cage and cage mates. The well-being (activity and food intake) of chimpanzees was monitored twice daily after the scan by veterinary staff for possible postanesthesia distress. Anatomical and diffusion MRI scans were acquired in a Siemens 3T Trio scanner (Siemens Medical System). A standard circularly polarized birdcage coil was used to accommodate the large chimpanzee jaw, which does not fit in the standard phase-array coil used in humans. 
DWI data were collected with a single-shot, spin-echo EPI sequence; to minimize eddy-current effects, a dual spin-echo technique combined with bipolar gradients was used. Parameters were as follows: 41 slices were scanned at a voxel size of 1.8 × 1.8 × 1.8 mm, TR/TE was 5,900/86 ms, and matrix size was 72 × 128. Two DWI images were acquired for each of 60 diffusion directions, each with one of the possible left–right phase-encoding directions and eight averages, allowing for correction of susceptibility-related distortion. For each average of DWI images, six images without diffusion weighting (b = 0 s/mm²) were also acquired with matching imaging parameters. High-resolution T1-weighted MRI images were acquired with a three-dimensional magnetization-prepared rapid gradient-echo sequence for all subjects. T2 images were previously acquired using parameters similar to a contemporaneous study on humans. Data preprocessing was achieved using the FSL software library of FMRIB ( https://www.fmrib.ox.ac.uk/fsl ). T1-weighted images were skull stripped with BET, with some manual correction. To correct for eddy currents and susceptibility distortion, FSL's eddy_correct and topup implemented in MATLAB (MATLAB7; MathWorks) were used. FMRIB's Diffusion Toolbox (FDT) was used to fit diffusion tensors, estimate mean diffusivity and fractional anisotropy, and run bedpostX to fit a voxel-wise model of diffusion tensors using a crossing-fiber model with three fiber directions. A modified version of the Human Connectome Project minimal preprocessing pipeline was used to create registrations to a population-specific chimpanzee template. Template generation for chimpanzees has been previously described; briefly, the PreFreeSurfer pipeline was used to align the T1w and T2w volumes of 29 individual chimpanzees to native anterior commissure–posterior commissure space.
FSL was used to perform brain extraction, cross-modal registration, bias field correction, and nonlinear volume registration to atlas space.
ROI Definition.
For human participants, two binary masks were defined within the Montreal Neurological Institute (MNI) space using the SPM Marsbar extraction tool and the Automated Anatomical Labeling (AAL) atlas: the pMTG and the ATL (for both the left and right hemispheres separately). The pMTG mask was defined by restricting the middle temporal gyrus to its portion located posteriorly to the central sulcus [y = −18 according to the methodology proposed by Turken and Dronkers] (SI Appendix, Fig. S2). The ATL mask was obtained by joining five parts: the middle and superior temporal poles and the anterior portions of the inferior, middle, and superior temporal gyri (terminating at y = −17). Subsequently, the masks were transferred to each individual's diffusion space, where the voxels in the mask that had a 90% probability of being present in the original mask were included. This conservative threshold of 90% was chosen to ensure that there was no overlap between the ATL and pMTG masks in the same participant. Afterward, the masks were binarized. In chimpanzees, a similar protocol was followed, with masks manually drawn in the chimpanzee template corresponding to human areas using homologous sulcal and gyral landmarks, based on recent sulcal/gyral maps for this species. Importantly for the delineation of the ATL ROI, the central sulcus in chimpanzees is substantially more angled than in humans. For this reason, the central point of the central fissure was chosen as a reference; slices were counted in the coronal direction, and the midpoint was set along the sulcus as a cutting point for defining the posterior and anterior temporal lobes (y = −15).
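The per-voxel 90% inclusion rule described above amounts to thresholding and binarizing the warped probability maps. A minimal numpy sketch (the probability values below are invented for illustration; real masks are 3D volumes):

```python
import numpy as np

# Invented per-voxel probabilities that a voxel belonged to the original
# template-space mask after warping to an individual's diffusion space.
prob_pmtg = np.array([0.95, 0.92, 0.40, 0.10, 0.91])
prob_atl = np.array([0.05, 0.30, 0.60, 0.97, 0.00])

THRESHOLD = 0.90  # conservative cutoff used to keep the two masks disjoint

# Keep only voxels with >= 90% probability, then binarize.
pmtg_mask = (prob_pmtg >= THRESHOLD).astype(np.uint8)
atl_mask = (prob_atl >= THRESHOLD).astype(np.uint8)

# With this threshold the two binary masks should share no voxels.
n_shared = int(np.sum(pmtg_mask & atl_mask))
```

The disjointness check mirrors the stated rationale for choosing 90%: preventing any ATL/pMTG overlap within a participant.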
For the delineation of the pMTG ROI in the chimpanzee, we considered three alternative options for its posterior limit: 1) the posterior edge of the Sylvian fissure; 2) the descending ramus of the STS; and 3) the limit defined according to a chimpanzee brain atlas (SI Appendix, Fig. S3). We chose the second option as the most suitable to reproduce human anatomy. Here, the posterior limit of the chimpanzee pMTG ROI was delineated at the descending ramus of the STS, which approximates the boundary between the unimodal extrastriate cortex and the multimodal association cortex based on previous studies. Subsequently, we compared the tractograms obtained with the three alternative ROIs to test whether this choice had a significant impact on the final results. On visual inspection, the tractograms were only minimally different (in both hemispheres) (SI Appendix, Fig. S4), which ensured that choosing ROIs according to fixed anatomical landmarks was appropriate. Statistical analyses showed that neither of the ROIs including multimodal association cortex only (options 1 and 2) showed a human advantage in connectivity between pMTG and the three major ventral tracts (UF, ILF, and IFOF). Moreover, the ROI extending to the limit with the unimodal extrastriate cortex (option 2, reported here) revealed that the three above-mentioned tracts showed statistically higher levels of overlap with pMTG in the chimpanzees. All remaining steps in the ROIs' transformation toward individual diffusion space were kept the same as for humans (SI Appendix, Fig. S2). Once the masks were obtained, we extracted their volume for humans and chimpanzees and weighted this measurement by the volume of the template used for their delineation (i.e., the gray- and white-matter MNI template and the chimpanzee template, respectively).
In humans, the masks took up the following proportions of brain template volume: pMTG, 0.018 (left) and 0.016 (right); ATL, 0.025 (left) and 0.031 (right). In the chimpanzee, the masks occupied the following proportions: pMTG, 0.008 (left) and 0.009 (right); ATL, 0.018 (both left and right).
Mean and Overlap of the Tractograms.
For both humans and chimpanzees, white-matter connections stemming from the ROIs were calculated using a probabilistic approach (FSL probtrackx) for both ROIs and both hemispheres separately. Tracking was initiated from all voxels within the seed masks to generate 10,000 streamline samples, with a curvature threshold of 0.2 and a 0.5-mm step length. The resulting connectivity maps were thresholded at 99% of the robust range and binarized. From these connectivity maps, two output images were calculated: the mean connectivity map of all the participants and the sum (overlap) of the connectivity maps of all participants, showing the per-participant overlap in tractography distributions. To better account for the interindividual variability, we present the visualization of the overlap maps in .
Definition of the Canonical Ventral and Dorsal Pathways for Language.
Once the white-matter tractograms related to the two seeds (pMTG and ATL) were defined for each individual (human and chimpanzee), we proceeded to define the canonical white-matter tracts with a well-established role in language: the three portions of the AF (frontoparietal, frontotemporal, and parietotemporal), IFOF, ILF, UF, and MdLF. In humans, the tracts were defined in a semiautomated manner, inputting the ROIs defined within the MNI space and using the autoptx [now renamed XTRACT and allowing cross-species tractography] algorithms as part of the probabilistic approach (FSL probtrackx). In order to virtually dissect the three branches of the AF, three different two-ROI combinations were applied as seed and target masks.
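The thresholding of the per-subject connectivity maps and the two group summary maps can be sketched with numpy. This is a hedged approximation: the maps below are random stand-ins for probtrackx output, and the "robust range" is assumed here to be the 2nd-98th intensity percentile span (as in FSL's `fslstats -r`), which may differ in detail from the exact FSL computation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-subject streamline-visitation maps (flattened volumes);
# real maps come from FSL probtrackx.
subject_maps = [rng.integers(0, 10_000, size=500).astype(float)
                for _ in range(3)]

def threshold_and_binarize(img, frac=0.99):
    # Approximate FSL's robust range as the 2nd-98th percentile span
    # (assumption), then keep voxels above frac of that range.
    lo, hi = np.percentile(img, [2, 98])
    return (img > lo + frac * (hi - lo)).astype(np.uint8)

binary_maps = np.stack([threshold_and_binarize(m) for m in subject_maps])

mean_map = binary_maps.mean(axis=0)    # group-average connectivity map
overlap_map = binary_maps.sum(axis=0)  # per-voxel subject count (0..N)
```

The overlap map counts, per voxel, how many subjects' thresholded tractograms pass through it, which is the per-participant overlap visualized in the figures.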
The ROIs were defined in the frontal, temporal, and parietal areas, and their combinations formed the frontotemporal (also called long), frontoparietal (also called anterior), and parietotemporal (also called posterior) branches of the AF. The ROI for the frontal area was placed in the coronal plane between the central sulcus and the inferior frontal gyrus. The ROI for the temporal area was placed in the axial plane at the level of white matter descending to the posterior temporal lobe through the posterior portion of the temporal stem. The parietal ROI was defined at the sagittal plane encompassing the angular and supramarginal gyri of the inferior parietal lobe (more details are in ref. ). This process was carried out for both the left and right hemispheres. Additionally, an exclusion mask was added to the AF analyses, encompassing the midline (sagittal slice), thalami, basal ganglia, and portions of the third and lateral ventricles. Subsequently, the ROIs were adapted to the population-specific chimpanzee template, informed by previous work on chimpanzee arcuate neuroanatomy. To define the ventral stream, we implemented tractography protocols used in humans and recently adapted specifically for chimpanzees for reconstructing the IFOF, ILF, MdLF, and UF, which are described in detail in Bryant et al. Briefly, the MdLF was reconstructed using seed and target masks in superior temporal gyrus (STG) white matter, with exclusion masks placed in the middle temporal gyrus (MTG), the inferior temporal gyrus (ITG), and the prefrontal cortex. For the ILF, masks were inverted from the MdLF protocol; seed and target masks were placed in the white matter within the MTG and ITG, and exclusion masks were placed in the STG as well as the hippocampal formation, amygdala, and the cerebellar peduncle. In humans, the ILF target mask was moved posteriorly to the level of the angular gyrus, with an additional axial slice in the inferior parietal lobule.
IFOF protocols involved a large coronal slice in the occipital lobe for the seed, a coronal slice in the prefrontal cortex as the target, and a coronal slice with two lacunae at the extreme/external capsule as the exclusion mask. The UF protocol used the same exclusion mask as the IFOF along with an ATL seed and a target in the extreme/external capsule. A second exclusion mask was placed posterior to the basal ganglia. The advantage of defining the ROIs within the MNI space was twofold. First, it assured that the seeds were defined in the same way for every individual; second, it allowed us to reliably replicate the same steps of the analyses between the two species. After visual inspection of the autoptx results (already corrected for the size of the seeds and densityNorm), the tracts were thresholded at 99% of the robust range and binarized (with the default threshold). In chimpanzees, all the steps of the analyses were kept the same as for humans. The three portions of the AF will be further considered as representative of the dorsal stream, whereas the IFOF, ILF, UF, and MdLF will represent the ventral stream.
Calculation of the Contribution of the Canonical Tracts to the Language Hubs.
To define the extent of overlap between the pMTG and ATL tractograms and the canonical tracts, the normalized, thresholded, and binarized pMTG and ATL tractograms (separately) were multiplied by each of the normalized, thresholded, and binarized canonical tracts. This step resulted in 14 values per participant (or chimpanzee) per hemisphere (pMTG × frontoparietal AF, pMTG × parietotemporal AF, pMTG × frontotemporal AF, pMTG × IFOF, pMTG × ILF, pMTG × UF, pMTG × MdLF, ATL × frontoparietal AF, ATL × parietotemporal AF, ATL × frontotemporal AF, ATL × IFOF, ATL × ILF, ATL × UF, ATL × MdLF).
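The 2 seeds × 7 tracts accounting above reduces to elementwise multiplication of binary volumes, with each overlap normalized by the canonical tract's own volume to give the tract load. A toy sketch, with random binary masks standing in for one participant's thresholded tractograms (not real data):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

seeds = ["pMTG", "ATL"]
tracts = ["frontoparietal AF", "parietotemporal AF", "frontotemporal AF",
          "IFOF", "ILF", "UF", "MdLF"]

# Random binary volumes stand in for the thresholded, binarized
# seed tractograms and canonical tracts of one participant/hemisphere.
masks = {name: (rng.random(1000) > 0.5).astype(np.uint8)
         for name in seeds + tracts}

def tract_load(seed_mask, tract_mask):
    # Proportion of the canonical tract occupied by the seed tractogram:
    # vol(seed tractogram AND canonical tract) / vol(canonical tract).
    return float((seed_mask & tract_mask).sum() / tract_mask.sum())

loads = {(s, t): tract_load(masks[s], masks[t])
         for s, t in product(seeds, tracts)}
```

Because each load is normalized by the canonical tract's volume, the values are proportions in [0, 1], comparable across tracts and species regardless of absolute brain size.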
Seven additional values were extracted to represent the absolute volume of the canonical tracts to correct the measure for canonical tract size [i.e., the volume of (tractogram binary mask × canonical binary mask)/the volume of the canonical tract binary mask]. These steps resulted in a measure of the contribution of the canonical tracts to the pMTG and ATL seed-related white-matter tractograms. Throughout the manuscript, we refer to these proportions of overlap as the tract loads.
Statistical Analyses.
For inferential statistics, the tract loads were specified as dependent variables. Stream (dorsal vs. ventral, within subject), hemisphere (left vs. right, within subject), and species (human vs. chimpanzee, between subjects) were defined as independent variables. In addition, the two streams were composed of specific tracts: the dorsal stream (three portions of the AF: frontoparietal AF, frontotemporal AF, parietotemporal AF) and the ventral stream (IFOF, ILF, UF, MdLF). First, a repeated-measures ANOVA was performed for each of the seeds separately to test whether the tract loads differed as a function of hemisphere, stream, and species (including their interactions). Following up on significant interactions, repeated-measures ANOVAs were performed within each hemisphere and seed to examine whether the species differed with respect to stream (dorsal vs. ventral). Finally, the contribution of each tract load to explaining the interspecies difference was quantified using 28 linear regressions (seven tracts, two hemispheres, two seeds). Adjusted R² values of the models were used as a measure of effect size of each tract's ability to explain the interspecies differences (all 28 P values were corrected for family-wise error rate due to multiple comparisons using the Holm method). The analyses were performed using R (version 3.5.3; R Core Team 2019) and the tidyverse, broom, and purrr packages.
Methodological Considerations.
Diffusion MRI tractography is a relatively new tool for comparative neuroscience. Although it has been criticized when compared directly with more traditional neuroscientific methods, it has been shown to be replicable and, further, it has clear advantages for comparative analyses. When compared with tract tracing, diffusion tractography in ex vivo macaques produced comparable results. The present investigation uses high–angular resolution data, which have been shown to perform well on difficult-to-reconstruct tracts like the acoustic radiation; further, multifiber algorithms increase sensitivity. Size and scan resolution differences are important to take into account in comparative anatomical studies; this dataset is the highest-quality in vivo chimpanzee dataset available and has been previously shown to perform favorably in comparison with human and macaque datasets. Additionally, tractograms are normalized after averaging to minimize the impact of differences in brain size and resolution between the two species. Another challenge of comparative neuroanatomy is to determine whether tracts have increased or decreased in size and whether this is relative to cortex volume, white-matter volume, other tracts, or the size of functional areas. Ultimately, this is not possible to disentangle, as it is not possible to reconstruct the anatomy of the common ancestor of humans and chimpanzees. However, it is possible to directly compare extant species, making the fewest assumptions about structural homologies and relying on the closest direct anatomical observation. Since chimpanzees not only have a brain roughly one-third the size of humans' but also have different proportions of gray and white matter (as those scale differently from one another as brain size increases across mammals), directly comparing volumes tract by tract between humans and other primates would be unsuitable.
Thus, the approach that makes the fewest assumptions is to rely on the relative sizes of tracts within species to anchor our analysis, as in previous comparative Diffusion Tensor Imaging (DTI) work. The best way to mitigate possible false positives is to use strong anatomical priors. Here, we adapted previously validated human tractography protocols to the chimpanzee using a chimpanzee white-matter atlas that, in turn, was based on strong anatomical knowledge from other species, including the macaque. The tractography procedure is the same for both species, which have similar gyrification indices and, in principle, should have similar vulnerability to gyral bias. This results in a like-with-like comparison that is best suited for comparative neuroanatomical studies and preferable to comparing different methodologies (e.g., comparing tracer and tractography data).
High-resolution DWI data for 50 healthy human subjects (mean age = 43.7 ± 21.6 y) were acquired using a Siemens Prisma Fit 3T scanner and a 32-channel head coil at the Donders Center for Cognitive Neuroimaging, Nijmegen. Diffusion-weighted images were acquired with a simultaneous multislice diffusion‐weighted echo planar imaging (EPI) sequence. Acquisition parameters were the following: multiband factor = 3; TR (repetition time) = 2,282 ms; TE (echo time) = 71.2 ms; in-plane acceleration factor = 2; voxel size = 2 × 2 × 2 mm 3 ; nine unweighted scans; 100 diffusion-encoding gradient directions in multiple shells; b values = 1,250 and 2,500 s/mm 2 ; and Taq (total acquisition time) = 8 min, 29 s. A high-resolution T1 anatomical scan was obtained for spatial processing of the DWI data using the MP2RAGE sequence with the following parameters: 176 slices; voxel size = 1 × 1 × 1 mm 3 ; TR = 6 s; TE = 2.34 ms; and Taq = 7 min, 32 s. MP2RAGE data were processed using the Oxford Center for Functional Magnetic Resonance Imaging of the Brain (FMRIB) software library (FSL 5.0.10; https://www.fmrib.ox.ac.uk/fsl ) and skull stripped with Brain Extraction Tool (BET). DWI images were preprocessed to realign and correct for eddy current (using Statistical Parametric Mapping software - SPM12) and for artifacts from head and/or cardiac motion using robust tensor modeling [Donders Institute Diffusion Imaging toolbox ]. After preprocessing, diffusion parameters were estimated at each voxel using BedpostX. Tensor reconstruction using weighted least squares fit was performed via Diffusion Tensor Imaging fitting (DTIFit) within FMRIB’s Diffusion Toolbox (FDT) to create Diffusion Tensor Imaging (DTI scalar images, including the fractional anisotropy (FA), mean diffusivity (MD), and three eigenvalues [FSL 5.0.10 ]. 
This study was approved by the local ethics committee (Commissie Mensgebonden Onderzoek (CMO, i.e., the Committee for Ethics of Research on Humans) Arnhem-Nijmegen, “Imaging Human Cognition,” CMO 2014/288). Subjects provided informed consent. Diffusion-weighted data from 29 chimpanzees ( Pan troglodytes ; 28 ± 17 y) were obtained from a data archive of scans obtained prior to the 2015 implementation of US Fish and Wildlife Service and NIH regulations governing research with chimpanzees. These scans were made available through the United States–based National Chimpanzee Brain Resource. All scans reported here were completed in 2012 and have been used in previous studies (e.g., refs. and ). Chimpanzees were housed at the Yerkes National Primate Research Center (YNPRC) in Atlanta, GA; procedures were carried out in accordance with protocols approved by the YNPRC and the Emory University Institutional Animal Care and Use Committee (approval no. YER-2001206). Following standard YNPRC veterinary procedures, chimpanzee subjects were immobilized with ketamine injections (2 to 6 mg/kg intramuscular) and then, anesthetized with an intravenous propofol drip (10 mg/kg per hour) prior to scanning. Subjects remained sedated for the duration of the scans and the time necessary for transport between their home cage and the scanner location. After scanning, primates were housed in a single cage for 6 to 12 h to recover from the effects of anesthesia before being returned to their home cage and cage mates. The well-being (activity and food intake) of chimpanzees was monitored twice daily after the scan by veterinary staff for possible postanesthesia distress. Anatomical and diffusion MRI scans were acquired in a Siemens 3T Trio scanner (Siemens Medical System). A standard circularly polarized birdcage coil was used to accommodate the large chimpanzee jaw, which does not fit in the standard phase-array coil used in humans. 
DWI data were collected with a single-shot, spin-echo EPI sequence; to minimize eddy-current effects, a dual spin-echo technique combined with bipolar gradients was used. Parameters were as follows; 41 slices were scanned at a voxel size of 1.8 × 1.8 × 1.8 mm, TR/TE was 5,900/86 ms, and matrix size was 72 × 128. Two DWI images were acquired for each of 60 diffusion directions, each with one of the possible left–right phase-encoding directions and eight averages, allowing for correction of susceptibility-related distortion . For each average of DWI images, six images without diffusion weighting (b = 0 s/mm 2 ) were also acquired with matching imaging parameters. High-resolution T1-weighted MRI images were acquired with a three-dimensional magnetization-prepared rapid gradient-echo sequence for all subjects. T2 images were previously acquired using parameters similar to a contemporaneous study on humans . Data preprocessing was achieved using the FSL software library of the FMRIB ( https://www.fmrib.ox.ac.uk/fsl ) . T1-weighted images were skull stripped with BET with some manual correction . To correct for eddy current and susceptibility distortion, FSL’s eddy_correct and topup implemented in MATLAB (MATLAB7; MathWorks) were used. FMRIB’s Diffusion Toolbox (FDT) was used to fit diffusion tensors, estimate mean diffusivity and fractional anisotropy, and run bedpostX to fit a voxel-wise model of diffusion tensors using a crossing fiber model with three fiber directions . A modified version of the Human Connectome Project minimal preprocessing pipeline was used to create registrations to a population-specific chimpanzee template. Template generation for chimpanzees has been previously described ; briefly, the PreFreeSurfer pipeline was used to align the T1w and T2w volumes of 29 individual chimpanzees to native anterior commissure–posterior commissure space. 
FSL was used to perform brain extraction, cross-modal registration, bias field correction, and nonlinear volume registration to atlas space.
For human participants, two binary masks were defined within the Montreal Neurological Institute (MNI) space using the SPM Marsbar extraction tool and the Automated Anatomical Labeling (AAL) atlas: the pMTG and the ATL (for both the left and right hemispheres separately). The pMTG mask was defined by restricting the middle temporal gyrus to its portion located posteriorly to the central sulcus [ y = −18 according to the methodology proposed by Turken and Dronkers ] ( SI Appendix , Fig. S2 ). The ATL mask was obtained by joining five parts: the middle and superior temporal poles and the anterior portions of the inferior, middle, and superior temporal gyri (terminating at y = −17). Subsequently, the masks were transferred to each individual’s diffusion space, where the voxels in the mask that had a 90% probability of being present in the original mask were included. This conservative threshold of 90% was chosen to ensure that there was no overlap between the ATL and pMTG masks in the same participant. Afterward, the masks were binarized. In chimpanzees, a similar protocol was followed, with masks manually drawn in the chimpanzee template corresponding to human areas using homologous sulcal and gyral landmarks in chimpanzees using recent sulcal/gyral maps for this species . Importantly for the delineation of the ATL ROI, the central sulcus in chimpanzees is substantially more angled than in humans. For this reason, the central point of the central fissure was chosen as a reference; slices were counted in the coronal direction, and the midpoint was set along the sulcus as a cutting point for defining the posterior and anterior temporal lobes ( y = −15). For the delineation of the pMTG ROI in the chimpanzee, we considered three alternative options of its posterior limit: 1) the posterior edge of the Sylvian fissure; 2) the descending ramus of the STS, and 3) the limit defined according to a chimpanzee brain atlas ( SI Appendix , Fig. 
S3 ); we chose the second option as the most suitable to reproduce human anatomy. Here, the posterior limit of the chimpanzee pMTG ROI was delineated at the descending ramus of the STS, which is an approximation of the boundary between the unimodal extrastriate cortex and the multimodal association cortex based on previous studies . Subsequently, we compared the tractograms obtained with the three alternative ROIs to test if the modifications provoke significant impact on the final results. From visual inspection, the tractograms were only minimally different (and in both hemispheres) ( SI Appendix , Fig. S4 ), which ensured that choosing ROIs according to fixed anatomical landmarks was appropriate. Statistical analyses showed that neither of the ROIs including multimodal association cortex only (options 1 and 2) showed human advantage in connectivity between pMTG and three major ventral tracts (UF, ILF, and IFOF). Moreover, the ROI extending until the limit with the unimodal extrastriate cortex (option 2, reported here) revealed that the three above-mentioned tracts showed statistically higher levels of overlap with pMTG in the chimpanzees. All remaining steps in ROIs’ transformation toward their individual diffusion space were kept the same as for humans ( SI Appendix , Fig. S2 ). Once the masks were obtained, we extracted their volume for humans and chimpanzees and weighted this measurement by the volume of the template used for their delineation (i.e., gray- and white-matter MNI template and the chimpanzee template, respectively). In humans, the masks took up the following proportions of brain template volume—pMTG: 0.018 (left) and 0.016 (right) and ATL: 0.025 (left) and 0.031 (right). In the chimpanzee, the masks occupied the following proportions—pMTG: 0.008 (left) and 0.009 (right) and ATL: 0.018 (both left and right).
For both humans and chimpanzees, white-matter connections stemming from the ROIs were calculated using a probabilistic approach (FSL probtrackx) for both ROIs and both hemispheres separately. Tracking was initiated from all voxels within the seed masks to generate 10,000 streamline samples, with a curvature threshold of 0.2 and a 0.5-mm step length. The resulting connectivity maps were thresholded at 99% of the robust range and binarized. From these connectivity maps, two output images were calculated—the mean connectivity map of all the participants and the sum (overlap) of the connectivity maps of all participants, showing the per-participant overlap in tractography distributions. To better account for the interindividual variability, we present the visualization of the overlap maps in .
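A minimal sketch of the post-tracking steps: each participant's connectivity map is thresholded at 99% of its robust range and binarized, and the group mean and overlap (per-participant count) maps are then computed. FSL conventionally takes the robust range from the 2nd and 98th intensity percentiles; that convention is assumed here, and the toy arrays stand in for real NIfTI volumes:

```python
import numpy as np

def threshold_binarize(conn, frac=0.99):
    """Binarize a connectivity map at frac of its robust range
    (robust min/max approximated by the 2nd/98th percentiles, as in FSL)."""
    rmin, rmax = np.percentile(conn, [2, 98])
    return (conn >= rmin + frac * (rmax - rmin)).astype(np.uint8)

def group_maps(subject_maps):
    """Per-voxel mean map and overlap (per-participant count) across
    participants' thresholded, binarized connectivity maps."""
    stack = np.stack([threshold_binarize(m) for m in subject_maps])
    return stack.mean(axis=0), stack.sum(axis=0)
```

The overlap map counts, voxel by voxel, how many participants' tractograms survive thresholding, which is what the visualization of interindividual variability is based on.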
Once the white-matter tractograms related to the two seeds (pMTG and ATL) were defined for each individual (human and chimpanzee), we proceeded to define the canonical white-matter tracts with a well-established role in language: the three portions of the AF (frontoparietal, frontotemporal, and parietotemporal) , IFOF, ILF, UF, and MdLF. In humans, the tracts were defined in a semiautomated manner, inputting the ROIs defined within the MNI space and using the autoptx [now renamed XTRACT and allowing cross-species tractography ] algorithms as part of the probabilistic approach (FSL probtrackx). In order to virtually dissect the three branches of AF, three different two-ROI combinations were applied as seed and target masks. The ROIs were defined in the frontal, temporal, and parietal areas, and their combinations formed frontotemporal (also called long), frontoparietal (also called anterior), and parietotemporal (also called posterior) branches of the AF. The ROI for the frontal area was placed in the coronal plane between the central sulcus and the inferior frontal gyrus. The ROI for the temporal area was placed in the axial plane at the level of white matter descending to the posterior temporal lobe through the posterior portion of the temporal stem. The parietal ROI was defined at the sagittal plane encompassing the angular and supramarginal gyri of the inferior parietal lobe (more details are in ref. ). This process was carried out for both the left and right hemispheres. Additionally, an exclusion mask was added to the AF analyses, encompassing the midline (sagittal slice), thalami, basal ganglia, and portions of the third and lateral ventricles. Subsequently, the ROIs were adapted to the population-specific chimpanzee template informed by previous work on chimpanzee arcuate neuroanatomy . 
To define the ventral stream, we implemented tractography protocols used in humans and recently adapted specifically for chimpanzees for reconstructing the IFOF, ILF, MdLF, and UF, which are described in detail in Bryant et al. Briefly, the MdLF was reconstructed using seed and target masks in superior temporal gyrus (STG) white matter, with exclusion masks placed in the middle temporal gyrus (MTG), the inferior temporal gyrus (ITG), and the prefrontal cortex. For the ILF, masks were inverted from the MdLF protocol; seed and target masks were placed in the white matter within the MTG and ITG, and exclusion masks were placed in the STG as well as the hippocampal formation, amygdala, and the cerebellar peduncle. In humans, the ILF target mask was moved posteriorly to the level of the angular gyrus, with an additional axial slice in the inferior parietal lobule. IFOF protocols involved a large coronal slice in the occipital lobe for the seed, a coronal slice in the prefrontal cortex as the target, and a coronal slice with two lacunae at the extreme/external capsule as the exclusion mask. The UF protocol used the same exclusion mask as the IFOF along with an ATL seed and a target in the extreme/external capsule. A second exclusion mask was placed posterior to the basal ganglia. The advantage of defining the ROIs within the MNI space was twofold: first, it ensured that the seeds were defined in the same way for every individual; second, it allowed us to reliably replicate the same analysis steps across the two species. After visual inspection of the autoptx results (already corrected for seed size and densityNorm), the tracts were thresholded at 99% of the robust range and binarized (with the default threshold). In chimpanzees, all steps of the analyses were kept the same as for humans.
The three portions of the AF will be further considered as representative of the dorsal stream, whereas the IFOF, ILF, UF, and MdLF will represent the ventral stream.
To define the extent of overlap between the pMTG and ATL tractograms and the canonical tracts, normalized, thresholded, and binarized pMTG and ATL tractograms (separately) were multiplied by each of the normalized, thresholded, and binarized canonical tracts. This step resulted in 14 values per participant (or chimpanzee) per hemisphere (pMTG × frontoparietal AF, pMTG × parietotemporal AF, pMTG × frontotemporal AF, pMTG × IFOF, pMTG × ILF, pMTG × UF, pMTG × MdLF, ATL × frontoparietal AF, ATL × parietotemporal AF, ATL × frontotemporal AF, ATL × IFOF, ATL × ILF, ATL × UF, ATL × MdLF). Seven additional values were extracted to represent the absolute volume of the canonical tracts to correct the measure for canonical tract size [i.e., the volume of (tractogram binary mask × canonical binary mask)/the volume of the canonical tract binary mask]. These steps resulted in a measure of the contribution of the canonical tracts to the pMTG and ATL seed-related white-matter tractograms. Throughout the manuscript, we refer to these proportions of overlap as the tract loads.
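On binary masks, the tract-load measure defined above (the volume of the intersection of a seed tractogram with a canonical tract, divided by the volume of the canonical tract) reduces to a voxel count; the arrays below are illustrative:

```python
import numpy as np

def tract_load(seed_tractogram, canonical_tract):
    """Proportion of a canonical tract's volume overlapped by the
    (thresholded, binarized) seed-based tractogram:
    vol(seed AND canonical) / vol(canonical)."""
    seed = np.asarray(seed_tractogram, dtype=bool)
    canon = np.asarray(canonical_tract, dtype=bool)
    return np.logical_and(seed, canon).sum() / canon.sum()
```

Evaluating this for each of the seven canonical tracts against the pMTG and ATL tractograms yields the 14 values per participant and hemisphere described above.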
For inferential statistics, the tract loads were specified as dependent variables. Stream (dorsal vs. ventral, within subject), hemisphere (left vs. right, within subject), and species (human vs. chimpanzee, between subjects) were defined as independent variables. In addition, the two streams were composed of specific tracts: the dorsal stream (three portions of the AF: frontoparietal AF, frontotemporal AF, parietotemporal AF) and the ventral stream (IFOF, ILF, UF, MdLF). First, a repeated-measures ANOVA was performed for each of the seeds separately to test if the tract loads differed as a function of hemisphere, stream, and species (including their interactions). Following up on significant interactions, repeated-measures ANOVAs were performed within each hemisphere and seed to examine if the species differed with respect to stream (dorsal vs. ventral). Finally, the contribution of each tract load to explaining the interspecies difference was quantified using 28 linear regressions (seven tracts, two hemispheres, two seeds). Adjusted R² values of the models were used as a measure of effect size of each tract's ability to explain the interspecies differences (all 28 P values were corrected for family-wise error rate due to multiple comparisons using the Holm method). The analyses were performed using R (version 3.5.3; R Core Team 2019) and the tidyverse, broom, and purrr packages.
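Two statistical ingredients of this analysis are easy to make concrete: the Holm family-wise error correction applied to the 28 regression P values, and the adjusted R² used as the effect-size measure. The original analyses were run in R (where p.adjust(p, method = "holm") performs the correction); the Python sketch below is an equivalent illustration:

```python
def holm_adjust(pvals):
    """Holm step-down adjusted p-values (matches R's p.adjust, method
    'holm'): the k-th smallest p-value is scaled by (m - k + 1), with
    monotonicity enforced and values capped at 1."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted, running = [0.0] * m, 0.0
    for rank, idx in enumerate(order):
        running = max(running, (m - rank) * pvals[idx])  # enforce monotonicity
        adjusted[idx] = min(1.0, running)
    return adjusted

def adjusted_r2(r2, n_obs, n_predictors):
    """Adjusted R^2, penalizing R^2 for the number of predictors."""
    return 1 - (1 - r2) * (n_obs - 1) / (n_obs - n_predictors - 1)
```

With 28 tests, an uncorrected p-value of 0.001 would still survive Holm correction (28 × 0.001 = 0.028), whereas p = 0.01 would not.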
Diffusion MRI tractography is a relatively new tool for comparative neuroscience. Although it has been criticized when compared directly with more traditional neuroscientific methods, it has been shown to be replicable and, further, has clear advantages for comparative analyses. When compared with tract tracing, diffusion tractography in ex vivo macaques produced comparable results. The present investigation uses high-angular-resolution data, which have been shown to perform well on difficult-to-reconstruct tracts such as the acoustic radiation; further, multifiber algorithms increase sensitivity. Size and scan resolution differences are important to take into account in comparative anatomical studies; this dataset is the highest-quality in vivo chimpanzee dataset available and has previously been shown to perform favorably in comparison with human and macaque datasets. Additionally, tractograms are normalized after averaging to minimize the impact of differences in brain size and resolution between the two species. Another challenge of comparative neuroanatomy is to determine whether tracts have increased or decreased in size and whether this is relative to cortex volume, white-matter volume, other tracts, or the size of functional areas. Ultimately, this is not possible to disentangle, as it is not possible to reconstruct the anatomy of the common ancestor of humans and chimpanzees. However, it is possible to directly compare extant species, making the fewest assumptions about structural homologies and relying on the closest direct anatomical observation. Since chimpanzees not only have a brain roughly one-third the size of humans but also have different proportions of gray and white matter (as these scale differently from one another as brain size increases across mammals), directly comparing volumes tract by tract between humans and other primates would be unsuitable.
Thus, the approach that makes the fewest assumptions is to rely on the relative sizes of tracts within each species to anchor the analysis, as in previous comparative diffusion tensor imaging (DTI) work. The best way to mitigate possible false positives is to use strong anatomical priors. Here, we adapted previously validated human tractography protocols to the chimpanzee using a chimpanzee white-matter atlas that, in turn, was based on strong anatomical knowledge from other species, including the macaque. The tractography procedure is the same for both species, which have similar gyrification indices and, in principle, should have similar vulnerability to gyral bias. This results in a like-with-like comparison, which is best suited for comparative neuroanatomical studies and preferable to comparing across different methodologies (e.g., tracer vs. tractography data).
Toward a Better Understanding of Cardiovascular Risk in the Transgender and Gender-Diverse Community: A Global Call to Action

Cardiovascular diseases (CVDs) are the leading cause of death globally, representing nearly 32% of deaths annually. Systemic health disparities that cause the burden of disease to fall disproportionately on minority groups have been well documented in the scientific literature. Data collected from American adults through the Behavioral Risk Factor Surveillance System (BRFSS) and the All of Us research program demonstrate that gender significantly modulates the odds of developing cardiovascular health problems, and that gender non-conforming individuals are at a disproportionately high risk of developing such disorders. For example, BRFSS data show that the adjusted odds ratio (AOR) for coronary heart disease/myocardial infarction comparing transgender women to cisgender women was 2.07 (95% CI 1.37, 3.13). There is also evidence of higher rates of risk behaviors in the transgender community. Transgender females have greater odds of engaging in heavy drinking than their cisgender counterparts (AOR 1.81, 95% CI 1.26, 2.60), and transgender men are more likely to engage in no exercise than cisgender men (AOR 1.85, 95% CI 1.31, 2.62) and have higher odds of suffering from multiple chronic diseases, including type 2 diabetes and arthritis (AOR 1.88, 95% CI 1.32, 2.67). Though the incidence of CVD in the transgender and gender-diverse (TGD) community could be mitigated with preventative medicine and lifestyle modifications that are generalizable to the broader population, members of this community also face a unique set of sociocultural factors that warrant clinical consideration. The reasons for the increased incidence of CVD among the TGD community are broad-ranging, and the solution to this problem requires both interdisciplinary inquiry and multisectoral collaboration.
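The adjusted odds ratios quoted above come from multivariable logistic regression, but the underlying arithmetic of an odds ratio and its Wald 95% confidence interval can be illustrated with an unadjusted 2×2 table. The counts below are entirely hypothetical and are not drawn from the BRFSS data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a/b = cases/non-cases in group 1, c/d = cases/non-cases in group 2.
    The CI is computed on the log-odds scale and exponentiated."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi
```

For example, 20/80 cases vs. non-cases in one group against 10/90 in another gives an odds ratio of 2.25; an interval that excludes 1.0 would indicate statistical significance at the 5% level.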
This is the subject of a public call to action that was put forth by the Mexican Society of Cardiology, the Inter-American Society of Cardiology, and the World Heart Federation (WHF) on September 29, 2022 (the date of World Heart Day) . In keeping with WHF’s mission to provide cardiovascular health for all, this declaration synthesized information regarding the increased incidence of adverse cardiovascular health events in the TGD community in Latin America and served as a call-to-action for cardiovascular health-focused organizations to work to close this crucial health disparity . The objective of this report is to supplement this call to action by providing a global perspective on health disparities in TGD communities with a focus on disparities in cardiovascular health, exploring the multiple facets of this issue, and articulating some priorities for the scientific community in the process of advancing towards the goal of providing equitable healthcare for all. The social determinants of health are the social, political, economic, and environmental factors that influence an individual’s ability to maintain good health. Among the key social determinants of health are socioeconomic status, race/ethnicity, gender, and housing situation . While interpersonal rejection and financial insecurity due to these factors can predispose an individual to CVD by directly impeding access to healthcare services, the physiological stress and inflammation caused by social determinants of health can also further negative health impacts . For instance, residence in disadvantaged neighborhoods that have high poverty rates, food insecurity, and low levels of social cohesion has been found to be related to increased levels of inflammatory biomarkers such as IL-6, TNF-α, and IL-1β, the elevation of which has been linked with the development of CVD . 
Thus, it is important to consider the unique intersections of sociodemographic challenges faced by the transgender community in order to understand their elevated CVD risk. Survey data from a community-based participatory research study conducted in Puerto Rico revealed that 65.4% of transgender respondents had experienced public harassment due to their gender identity at least once and 55.8% had experienced intimate partner abuse. A cross-sectional study conducted among California middle and high schoolers demonstrated that transgender youth are 2.5 to 4 times more likely to develop a substance use disorder than their cisgender counterparts, and BRFSS data revealed that transgender adults have higher odds of suffering from depressive disorder (for transgender women, AOR 2.02, 95% CI 1.52, 2.69; for transgender men, AOR 3.14, 95% CI 2.07, 4.77). These findings reflect the broader consensus that, relative to other demographic groups, members of the LGBTQIA+ community are predisposed to experiencing public harassment or violence due to their gender expression and are at increased risk of developing mental health complications and substance abuse disorders, all of which contribute to increased CVD risk. Another LGBTQIA+ issue that is especially significant for the TGD community is identity development. The process of creating an identity as a transgender person that is both internally and externally acceptable is extremely nuanced and is heavily dependent on the perception of others. Transgender individuals often face external pressure to conceal their identity and “pass” as cisgender, which can help them avoid discrimination but may alienate them from the in-group solidarity found within the LGBTQIA+ community. These barriers to forming effective social relationships can further increase the stress faced by TGD individuals, predisposing them to developing CVD.
In the Transition Experience Study, the lived experience of the social and medical gender transition was examined through surveys and interviews with a sample of transmasculine individuals living in the United States of America . Residence in regions where the geopolitical climate is perceived to be progressive was associated with less stress in TGD individuals, as evidenced by linear regression models of the relationship between the progressiveness of a region and biomarkers of allostatic load (p = .001). Another protective factor that was discussed in this study is sociodemographic advantage, which is the presence of factors that are socially acceptable (i.e., non-minority race, high socioeconomic status). In line with researchers’ hypotheses, sociodemographic advantage was also negatively correlated with markers of stress . The concept that is often used to articulate the relationship between stress and adverse health outcomes in gender and ethnic minorities is the minority stress theory (MST). The Gender Minority Stress and Resilience Model visualizes MST by depicting distal stressors (i.e., gender-based discrimination) and proximal stressors (i.e., internalized transphobia), which converge and contribute to increased levels of stress in minority groups . To expand on the MST and include consideration of various intersecting minority identities, the American Heart Association has developed the Intersectional Transgender Multilevel Minority Stress model, which relates health status to the “degree of stigmatization” (i.e., someone with multiple intersecting minority social identities will face a higher degree of stigmatization than someone with only one) . This model, along with other available literature on this subject, illustrates the relationship between social stigma and cardiovascular health. 
Transgender individuals may face stigma at the individual level (i.e., internalized homophobia), the microsystem level (i.e., through enacted and perceived stigma in interpersonal interactions), and the macrosystem level (i.e., structural stigma) . The negative health impacts of stigmatization can be counteracted with resilience promoting factors at each level (for example, improved community connectedness at the microsystem level) . Interventions intended to address stigma beyond the clinic will likely be required to reduce the minority stress faced by the TGD community as well as the broader LGBTQIA+ community, which in turn will impact their cardiovascular health outcomes. According to the most recent ACC/AHA guidelines for the primary prevention of cardiovascular disease, three central recommendations for patient-centered prevention of CVD are increased use of team-based care (which is the collaboration of multidisciplinary healthcare professionals on each case), shared decision-making between the provider and patient, and adequate consideration of the social determinants of health in developing treatment regimens . All three of these recommendations rely on a healthcare workforce that is well-informed, free of prejudice, and easily accessible to all patients. However, this is often not the case for TGD patients. A study from the U.S. demonstrated significant disparities in access to healthcare services between sociodemographic identities. White and cisgender patients have higher rates of health insurance and are more likely to have the financial capacity to pay for medical services (P = 0.01), and transgender patients are more likely to delay seeking medical attention (P < 0.001) and report negative experiences with medical providers (P < 0.001) . 
These disparities result from a combination of sociodemographic factors such as income level and stigma, as well as a unique set of challenges faced by gender non-conforming individuals in the patient-provider relationship. Additionally, there are currently no standardized methods to collect information about gender identity in patient histories, thereby reducing visibility of transgender patients in the healthcare system . Along the same lines, healthcare professionals typically do not receive training on the specific needs of the TGD community, as existing medical education initiatives often focus on the LGBTQ+ community in a general sense . The attitudes of many healthcare providers are another barrier to effective healthcare for the TGD community. When nursing, health sciences, and medical students at a public university in Istanbul were surveyed using the Hudson and Ricketts Homophobia Scale, most study participants were found to exhibit medium levels of homophobia . Some transgender patients even resort to medical travel in pursuit of an accepting environment in which to undergo gender-affirming procedures . For example, a 2010 study found that it is common for transgender women to undergo gender-affirming procedures in Thailand instead of in Australia, America, and Europe due to the inclusive and respectful environment that can be found in the Thai medical community . These findings expose the general heteronormativity of the healthcare system. Homophobia is not only evident in individuals in the medical field, but it is also embedded in the system. Healthcare professionals are not taught to consider the gender identities of their patients when providing care, which often forces TGD patients to advocate for themselves to receive proper medical care. Some publications provide guidance on how medical professionals can adopt a more informed and sensitive approach to patient care in the TGD community. A narrative review by Rosendale et al. 
(2018) provides a brief guide to gender-inclusive medical care for clinicians and includes recommendations such as using gender-neutral pronouns until a patient specifies their gender identity and taking an inclusive “anatomic inventory” including questions about whether patients have had gender-affirming surgery . The Standards of Care for the Health of Transgender and Gender Diverse People (the most recent version, the SOC-8, was published in 2022) is a periodic publication outputted by the World Professional Association for Transgender Health (WPATH) that provides guidance for clinicians on how to care for the TGD community . Widespread integration of research such as this into the education of healthcare professionals is crucial to making the clinical setting a safe space for TGD individuals. Since the gender of a transgender individual differs from the sex that was assigned to them at birth, the incongruency between the gender they identify with and their physical characteristics can cause a type of psychological distress known as gender dysphoria. Gender-affirming hormone therapies (GHT) can mitigate these feelings by modifying a person’s physical characteristics to match their identified gender . Research regarding the relationship between GHT and cardiovascular health suggests that estrogen therapy as administered to transgender women (women who were assigned male at birth) increases their risk for venous thromboembolism over 5-fold . GHT for both transgender women and transgender men (men who were assigned female at birth) has been demonstrated to have a significant impact on blood pressure (contributing to an increase of 17.8 mmHg in transgender women and 13.4 mmHg in transgender men after two years of GHT) . However, several findings regarding the influence of GHT on cardiovascular health and risk factors are inconsistent. 
For example, elevated body mass index (BMI) is a risk factor for the development of CVD, and the relationship between GHT and BMI has been explored in a handful of studies. One systematic review revealed significant increases in the BMI of transgender men after they began GHT (1.3%–11.4%); however, a longitudinal study revealed no significant increases in BMI. Hormone therapy has also been shown to lead to increased triglycerides (~21.4 mg/dL; 95% CI, 0.14–42.6), higher LDL cholesterol (17.8 mg/dL; 95% CI, 3.5–32.1), and lower HDL cholesterol (–8.5 mg/dL; 95% CI, –13 to –3.9). However, according to 2014–2017 data from the BRFSS, there were no differences in self-reported hypercholesteremia between TGD and cisgender adults. Further research is required to generate reproducible findings on the relationship between GHT and cardiovascular health. One study investigated a potential mechanism for the relationship between hormone therapy and inflammation, as measured by systemic and endothelial biomarkers, platelet activation markers, and coagulation markers. The principal finding was that hormone therapy reduced inflammatory biomarkers (hs-CRP –66%, 95% CI –76; –53; VCAM-1 –12%) and increased platelet activation markers (PF-4 +17%, 95% CI 4; 32; β-thromboglobulin +13%, 95% CI 2; 24). Inflammation has a potent impact on cardiovascular health, and these findings describe a potential explanation for the biological basis of the relationship between GHT and CVD. Additional research on the mechanism by which GHT impacts cardiovascular health is limited. Studies on GHT are often limited in scope and generalizability. For example, they generally include only participants younger than 50 years of age, which means that the medical community is not well informed on the influence of GHT throughout the process of aging.
Additionally, many findings on the health impacts of GHT have the potential to be confounded by factors such as the rates of mental health disorders and substance abuse in TGD populations, which can also have a significant impact on CV health. In a review analyzing the association between the route of administration of estrogen therapy and cardiovascular risk in transgender women, Miranda et al. (2022) identified variability in estrogen formulation, dose, and treatment duration across the included studies. The lack of stratification based on these factors during data collection introduces potential confounding of the results. Issues mentioned in other papers include the need for large cohort studies and longer follow-up periods to determine the long-term impacts of GHT, and the lack of consensus on appropriate control populations for studies of TGD patients. Inclusive research endeavors are crucial for the attainment of health equity, and the lack of research regarding gender-affirming hormone therapies is likely to affect the quality of medical care for the TGD community. The transgender community is under-represented in cardiovascular research, which limits understanding of health disparities related to transgender identity. To create a more gender-inclusive medical field, more data elucidating the unique risks faced by transgender patients are required. Higher rates of adverse cardiovascular health outcomes in the TGD community can be attributed to a range of factors, including social determinants of health, structural issues with the healthcare system, side effects of gender-affirming treatments, and a lack of research on the unique needs of this community. Dismantling the web of structural violence that has led to this elevated CVD risk in the TGD community requires interdisciplinary collaboration. For example, the homophobia among pre-health students described by Harmanci Seren et al.
is a microcosm of broader patterns of homophobia in their societies. Educational reform in health-oriented graduate schools is important to create a generation of medical professionals who are aware of the unique struggles of the TGD community, but social initiatives that aim to address homophobia and gender bias on a broader scale are just as important for shifting the public discourse around this community. Healthcare professionals, cardiologists, and primary practitioners have an obligation to educate themselves about the transgender community and to work to make their practices inclusive and accepting environments. Additionally, to echo the calls to action in many of the studies discussed in this report, inclusive and holistic data on this subject are needed. Specifically, there is a lack of controlled clinical trials regarding the therapeutic applications and side effects of gender-affirming hormone therapy, as well as a paucity of studies intended to learn about the needs of the aging transgender community. Ultimately, an overarching goal of all this work is to build trust among the TGD community in a healthcare system that has historically been heteronormative. Meaningful educational initiatives for healthcare professionals and the broader society are imperative to building trust with the transgender community, as are research efforts that spotlight marginalized populations and make their voices heard. Creating a society with lower rates of CVD and narrower health disparities among social groups is a huge undertaking, but one that can be made possible with meaningful and intersectoral collaboration.
Phytochemistry, Ethnopharmacological Uses, Biological Activities, and Therapeutic Applications of Cassia obtusifolia L.

Cassia (family Caesalpiniaceae) is a large tropical genus with ~600 species of herbs, shrubs, and trees. Cassia obtusifolia (sicklepod) Linn., a member of the genus Cassia (Leguminosae), is a well-known traditional Chinese medicinal plant. It belongs to the medically and economically important family Leguminosae (syn. Fabaceae; subfamily Caesalpinioideae). C. obtusifolia L. is found mainly in China, Korea, India, and the western tropical regions. It is an annual semi-shrubby herb that ranges in height from ~0.5 to 2 m. It has two or three pairs of round-tipped leaflets with one to three flowers on a short axillary peduncle with pedicels up to 2 cm; the yellow petals (0.8–1.5 cm) wilt by midday. The pods are linear (up to 20 cm in length), curve gently downward, and contain numerous shiny, dark brown seeds (~0.5 cm in length). The seeds of C. obtusifolia L. are rhomboidal or slightly flat, with linear concave ramps on each side. Cassia tora L. is considered synonymous with C. obtusifolia L., but differs in its botanical and morphological characteristics. The main distinguishing morphological feature between the two is the seed coat, which is marked with an obliquely symmetrical dented line on each side of the rib (C. obtusifolia L.) or has broad bands on both sides of the rib (C. tora L.). Cassia species are of medicinal interest because of their therapeutic value in traditional medicine. The dry seeds are processed as a crude drug for clinical use or as a dietary supplement. The cultured plants are important sources of Semen Cassiae-derived commercial products in the market. C. obtusifolia L. seeds are a well-known medicinal plant in East Asia and are consumed as food to clear liver heat, sharpen vision, lubricate the intestines, and promote bowel movement.
In Korea, dried and roasted Cassia seeds are frequently used in brewing tea. In traditional oriental and Chinese medicine (Juemingzi in Chinese), C. obtusifolia L. has been used to treat lacrimation, headaches, dizziness, and constipation. C. obtusifolia L. has several pharmacological properties, including antiplatelet aggregation, antidiabetic, antimicrobial, anti-inflammatory, hepatoprotective, and neuroprotective activities, and may be used to treat Alzheimer's disease, Parkinson's disease, and cancer. It also contributes to histamine release and antiplatelet aggregation. The whole plant, as well as its roots, flowers, leaves, seeds, and pods, possesses medicinal properties. A summary of the ethnomedicinal uses of different parts of the plant is provided in . This review summarizes progress in the chemical analysis of C. obtusifolia L., focusing primarily on its phytochemistry, botanical aspects, and ethnopharmacological and pharmacological effects. C. obtusifolia L. species are rich sources of different types of anthraquinones and naphthopyrone derivatives that exhibit a number of biological activities and may potentially impact human health. Unfortunately, C. obtusifolia L. has not been developed as a pharmaceutical agent. The main objective of this review is to present a summary of the studies published to date on this promising plant, providing a solid platform for the design and conduct of clinical studies. This paper reviews the phytochemical and pharmacological activities of C. obtusifolia L. and discusses its potential uses as a human food source and/or a pharmacological agent.
Several classes of bioactive metabolites have been identified from C . obtusifolia L., including anthraquinones, terpenoids, flavonoids, and lipids . The main plant chemicals include anthraquinone, emodin, chrysophanol, physcion, obtusifolin, obtusin, aurantio-obtusin, chryso-obtusin, alaternin, questin, aloe-emodin, gluco-aurantio-obtusin, gluco-obtusifolin, chrysophanol-2- O -tetraglucoside, chrysophanol-2- O -triglucosides, and chryso-obtusin-2-glucoside . Other components include naphthopyrone glycosides, toralactone-9-β-gentiobioside, toralactone gentiobioside, cassiaside, rubrofusarin-6- O -gentiobiosideol, rubrofusarin-6-β-gentiobioside, cassiaside C, cassiaside B2, cassiaside C2, xanthones (1,8-dihydroxy-3-methoxy-6-methylxanthone, isogentisin, 1,7-dihydroxy-3-methylxanthone, euxanthone, 1,3,6-trihydroxy-8-methylxanthone), triterpenoids (lupeol, betulinic acid, α-amyrin, sterols, polyketide, steroids, fatty esters), and toralactone . The chemical structures of the main compounds are presented in . Research on C . obtusifolia L. reveals that the nature and number of phytochemicals vary according to climate. Researchers have found that the whole C . obtusifolia L. plant (seeds, twigs, leaves, and roots) is rich in free and bound anthraquinones, although the quantities differ markedly. In general, anthraquinone content is higher in the seeds and less abundant in the other plant parts. The following section discusses the phytochemical contents of the various plant parts.

2.1. The Whole Plant

Analysis of the whole C . obtusifolia L.
plant indicates the presence of various anthraquinones and naphthopyrones: aloe-emodin, emodin, 1,2-dihydroxyanthraquinone, obtusin, chryso-obtusin, aurantio-obtusin, gluco-obtusifolin, gluco-aurantio-obtusin, gluco-chryso-obtusin, 1-desmethylaurantio-obtusin-2- O -β- d -glucopyranoside, 1-desmethyl-obtusin, aurantio-obtusin-6- O -β- d -glucopyranoside, 1-desmethylaurantio-obtusin, alaternin-1- O -β- d -glucopyranoside, chryso-obtusin-2- O -β- d -glucopyranoside, physcion-8- O -β- d -glucoside, obtusifolin, O -methyl-chrysophanol, emodin-1- O -β-gentio-bioside, chrysophanol-1- O -β-gentiobioside, chrysophanol-1- O -β- d -glucopyranosyl-(1→3)-β- d -glucopyranosyl-(1→6)-β- d -glucopyranoside, physcion-8- O -β-glucoside, 1,3-dihydroxy-8-methylanthraquinone, torosachrysone, 1-methylaurantio-obtusin-2- O -β- d -glucopyranoside, 1-desmethylchryso-obtusin, chrysophanic acid, physcion, chrysophanol-10,10′-bianthrone, physcion-8- O -β-gentiobioside, and questin .

2.2. Seeds

Cassia obtusifolia seeds are composed of 1–2% anthraquinones, 5–7% fats, 14–19% protein, and 66–99% carbohydrates . In addition to proteins and fats, the seeds also contain a gum of commercial interest . As much as 41% of the seed is extractable . Several anthraquinone compounds and glycosides have been isolated from the methanol extract of the seeds; examples include anthraquinone, chrysophanol, physcion, emodin, obtusifolin, obtusin, questin, chryso-obtusin, gluco-obtusifolin, aloe-emodin, alaternin, aurantio-obtusin, gluco-aurantio obtusin, chrysophanol tetraglucoside, 2-hydroxyemodin-1-methylether, chryso-obtusin-2-glucoside, chrysophanol triglucoside, 1,2-dihydroxyanthraquinone, 1,4-dihydroxyanthraquinone, 1,8-dihydroxyanthraquinone, 1,8-dihydroxy-3-methylanthraquinone, naphthopyrone glycoside, toralactone gentiobioside, cassiaside, and the naphthalene glycoside cassitoroside .
Torosachrysone and the naphthalenic lactones isotoralactone and cassialactone, three benzyl-β-resorcylates (2-benzyl-4,6-dihydroxy benzoic acid, 2-benzyl-4,6-dihydroxy benzoic acid-6-O-β- d -glucopyranoside, and 2-benzyl-4,6-dihydroxy benzoic acid-4-O-β- d -glucopyranoside), a new sodium salt of anthraquinone (sodium emodin-1- O - β -gentiobioside), chrysophanol-1- O - β - d -glucopyranosyl-(1–3)- β - d -glucopyranosyl-(1–6)- β - d -glucopyranoside, rubrofusarin-6- O - β - d -gentiobioside, obtusifolin-2- O - β - d -glucopyranoside, aurantio-obtusin-6- O - β - d -glucopyranoside, physcion-8- O - β - d -glucopyranoside, 1-hydroxy-2-acetyl-3,8-dimethoxy-6- O - β - d -apiofuranosyl-(1–2)- β - d -glucosylnaphthalene, toralactone-9- O - β - d -gentiobioside, and rubrofusarin-6- O - β - d -apiofuranosyl-(1–6)- O - β - d -glucopyranoside have also been isolated from C . obtusifolia L. seeds . In addition, three acetylated anthraquinone glycosides (obtusifoline-2- O -β- d -2,6-di- O -acetylglucopyranoside, obtusifoline-2- O -β- d -3,6-di- O -acetylglucopyranoside, and obtusifoline-2- O - β - d -4,6-di- O -acetylglucopyranoside) have been isolated from the ethanolic extract of the seeds . Recently, Pang et al. have isolated four new compounds from the seeds of C. obtusifolia : obtusifolin-2- O - β - d -(6′- O - α , β -unsaturated butyryl)-glucopyranoside, epi -9-dehydroxyeurotinone- β - d -glucopyranoside, obtusinaphthalenside A, and obtusinaphthalenside B. Feng et al. also purified various monosaccharides and polysaccharides from the water extract of C. obtusifolia L.

2.3. Leaves

The leaves of C. obtusifolia L. contain anthraquinones, xanthones, polyketide, steroids, triterpenoids, and fatty esters .
The methanol extract of the leaves contains aloe emodin, emodin, 1,8-dihydroxy-3-methoxy-6-methylxanthone, euxanthone, chrysophanol, physcion, 1,2,8-trihydroxy-6,7-dimethoxyanthraquinone, 1,7-dihydroxy-3-methoxyxanthone, 1,5-dihydroxy-3-methoxy-7-methylanthraquinone, 3,7-dihydroxy-1-methoxyxanthone, 1- O -methylchrysophanol, 8- O -methylchrysophanol, 1,3,6-trihydroxy-8-methylxanthone, 1-hydroxy-7-methoxy-3-methylanthraquinone, and obtusifolin. The ethyl acetate extract contains (4 R *,5 S *,6 E ,8 Z )-ethyl-4-([ E ]-but-1-enyl)-5-hydroxypentdeca-6,8-dienoate, (24 S )-24-ethylcholesta-5,22( E ),25-trien-3β-ol, -acetoxy-9,10-dimethyl-1,5-octacosanolide, friedelin, stigmasterol, lupeol, and ( E )-eicos-14-enoic acid . A single phytoalexin was isolated and purified from 12- to 14-day-old leaves .

2.4. Roots

The hairy roots of C . obtusifolia L. contain betulinic acid, sitosterol, stigmasterol, anthraquinones, chrysophanol, physcion, 1-hydroxy-7-methoxy-3-methylanthraquinone, 8- O -methylchrysophanol, 1- O -methylchrysophanol, 1,2,8-trihydroxy-6,7-dimethoxyanthraquinone, emodin, iso-landicin, helminthosporin, obtusifolin, aloe-emodin, and xanthorin .
Numerous researchers have investigated the pharmacological activities of various C . obtusifolia L. extracts. A summary of the observed pharmacological features is provided in . They include antidiabetic, anti-inflammatory, antimicrobial, antioxidant, hepatoprotective, neuroprotective, immune-modulatory, anti-Parkinson’s disease, anti-Alzheimer’s disease, and larvicidal properties. The anthraquinones and naphthopyrones isolated from C . obtusifolia L. are structurally diverse and exhibit multiple pharmacological properties, which suggests that these compounds contribute to its therapeutic effects . C . obtusifolia L. and its major constituents display a vast number of biological activities . Natural products are highly promising sources of antioxidant and anti-inflammatory agents, and a wide range of bioactive plant constituents have antioxidant and anti-inflammatory activities. The antioxidant and anti-inflammatory activities, nutraceutical and therapeutic effects, and underlying mechanisms of traditional Chinese medicines have been investigated using various assay methods and activity indices. The generation of free radicals can result in damage to the cellular machinery. The seeds of C. obtusifolia L. are widely used in Chinese folk medicine and have been demonstrated to exhibit significant antioxidant and anti-inflammatory activities. Over the past century, natural products, especially anthraquinone compounds, have become valuable sources of chemical diversity in molecules used for inflammation relief. In addition, the seed extract has traditionally been used in Korea to treat eye inflammation, photophobia, and lacrimation.

3.1. Neuroprotective Activity

Various studies have demonstrated the direct neuroprotective activities of the C . obtusifolia L. seed extract (COE) and its major constituents (anthraquinones).
More detailed studies are required to clarify the compositional features and neuroprotective activities of the anthraquinones. The ethanolic COE (25, 50, or 100 mg/kg) ameliorates scopolamine- or bilateral common carotid artery occlusion (2VO)-induced memory impairment by inhibiting acetylcholinesterase . COE (10 or 50 mg/kg/day) reduced memory impairment and neuronal damage caused by 2VO in a mouse model of transient global ischemia; it was suggested that the neuroprotective effects of COE are attributable to its anti-inflammatory properties, resulting in decreased expression of inducible nitric oxide synthase (iNOS) and cyclooxygenase-2 (COX-2) and increased expression of the neurotrophic factors pCREB and BDNF . Alaternin, the active compound in C . obtusifolia L., exhibits neuroprotective activity after transient cerebral hypoperfusion induced by bilateral common carotid artery occlusion (BCCAO). Administration of alaternin (10 mg/kg) prevented or reduced nitrotyrosine formation and lipid peroxidation, BCCAO-induced iNOS expression, and microglial activation . Drever et al. reported that ethanolic COE is neuroprotective against NMDA-induced calcium dysregulation and 3-nitropropionic acid-induced cell death in mouse hippocampal cultures. Recently, Paudel et al. also reported that four major compounds (cassiaside, rubrofusarin gentiobioside, aurantio-obtusin, and 2-hydroxyemodin 1-methylether) exhibited neuroprotective effects; among them, aurantio-obtusin showed promising neuroprotective effects via targeting various G-protein-coupled receptors and in a transient brain ischemia/reperfusion injury model in C57BL/6 mice.

3.1.1. Anti-Alzheimer’s Disease Activity

The effects of the ethanolic COE in Aβ-induced models of Alzheimer’s disease (AD) have been reported. Mechanistically, COE ameliorated Aβ-induced LTP impairment in acute hippocampal slices and prevented Aβ-induced GSK-3β activation .
Moreover, COE prevented microglial activation as well as iNOS and COX activation induced by Aβ in the hippocampus, and in vivo studies have indicated that COE ameliorated Aβ-induced object recognition memory impairment . Two anthraquinones from C . obtusifolia L., obtusifolin and gluco-obtusifolin, improved scopolamine-induced learning and memory impairment in mice based on the passive avoidance and Morris water maze tests . Obtusifolin (0.25, 0.5, and 2 mg/kg) and gluco-obtusifolin (1, 2, and 4 mg/kg) significantly reversed scopolamine-induced cognitive impairment on the passive avoidance test; obtusifolin (0.5 mg/kg) and gluco-obtusifolin (2 mg/kg) improved escape latencies, swimming times in the target quadrant, and crossing numbers in the zone where the platform previously existed on the Morris water maze test . The anti-AD properties of COE may be attributed to its constituents, such as anthraquinones and naphthopyrone glycosides. The methanolic seed extract of C . obtusifolia L. and its solvent-soluble fractions were tested for their acetylcholinesterase (AChE) and butyrylcholinesterase (BChE) inhibitory activities using Ellman’s method. The ethyl acetate and butanol fractions significantly inhibited AChE activity at a final concentration of 100 µg/mL, with IC 50 values of 9.45 ± 0.44 and 9.87 ± 0.70 μg/mL, respectively. The butanol (IC 50 = 7.58 ± 0.51 μg/mL) and ethyl acetate (IC 50 = 16.09 ± 0.16 μg/mL) fractions exhibited potent inhibitory activity against BChE. Furthermore, the butanol fraction (IC 50 = 26.19 ± 0.72 μg/mL) significantly inhibited β-secretase (BACE1) activity . In addition, several anthraquinones (emodin, chrysophanol, physcion, obtusifolin, alaternin, questin, aloe-emodin) that displayed strong anti-AD activity by inhibiting the AChE, BChE, and BACE1 enzymes were isolated from this plant . Recently, Shrestha et al.
observed anti-AD effects of naphthopyrone and its glycosides, including rubrofusarin, rubrofusarin 6- O -β- d -glucopyranoside, rubrofusarin 6- O -β- d -gentiobioside, nor-rubrofusarin 6- O -β- d -glucoside, isorubrofusarin 10- O -β- d -gentiobioside, and rubrofusarin 6- O -β- d -triglucoside, by inhibiting the AChE, BChE, and BACE1 enzymes. The use of AChE, BChE, and BACE1 inhibitors is a promising treatment strategy for AD; therefore, C . obtusifolia may be an effective agent for treating AD.

3.1.2. Prevention and Treatment of Parkinson’s Disease

A neuroprotective effect of COE was observed in both in vitro and in vivo models of Parkinson’s disease . In PC12 cells, COE (0.1 to 10 µg/mL) reduced cell damage induced by 100 µM 6-hydroxydopamine and inhibited the overproduction of reactive oxygen species, glutathione depletion, mitochondrial membrane depolarization, and caspase-3 activation. In addition, COE displayed radical scavenging effects in DPPH and ABTS assays, which suggests that COE may be useful for treating Parkinson’s disease .

3.2. Hepatoprotective Activity

Few studies have demonstrated the hepatoprotective activities of COE . Further studies are required to establish the hepatoprotective mechanisms of the major COE anthraquinones. The protective effects of ethanolic COE against CCl 4 -induced liver injury in mice were evaluated by assessing aminotransferase activities, histopathological changes, hepatic and mitochondrial antioxidant indices, and cytochrome P450 2E1 (CYP2E1) activity. Administration of COE (0.5, 1, 2 g/kg) markedly reduced ALT and AST release, Ca 2+ -induced mitochondrial membrane permeability transition, and CYP2E1 activity. In addition, COE significantly reduced hepatic and mitochondrial malondialdehyde levels, increased hepatic and mitochondrial glutathione levels, and restored superoxide dismutase, glutathione reductase, and glutathione S-transferase activities . Meng et al.
reported the hepatoprotective effects of ethanolic COE on non-alcoholic fatty liver disease (NAFLD). Administration of COE (0.5, 1, 2 g/kg) markedly reduced the levels of AST, ALT, TG, TC, TNF-α, IL-6, IL-8, and MDA. COE treatments also increased the levels of SOD and GSH and the expression of LDL-R mRNA . Seo et al. observed hepatoprotective effects of ethanolic COE and its components (e.g., toralactone glycoside) against t -BHP-induced cell death in HepG2 cells. The Cassia anthraquinones aurantio-obtusin and obtusifolin also protected against tacrine-induced cytotoxicity in HepG2 cells . Recently, Ali et al. investigated the hepatoprotective effects of different soluble fractions of the methanolic COE and its active components against t -BHP-induced oxidative stress in HepG2 cells. The possible mechanism is that alaternin, aloe emodin, and cassiaside potently scavenge ROS in t -BHP-induced HepG2 cells, and the decrease in ROS generation parallels the up-regulation of glutathione (GSH). Very recently, Paudel et al. investigated the hepatoprotective activity of an anthraquinone (1-desmethylaurantio-obtusin 2- O - β - d -glucopyranoside) and two naphthopyrone glycosides (rubrofusarin 6- O -β- d -apiofuranosyl-(1→6)- O - β - d -glucopyranoside and rubrofusarin 6- O -β-gentiobioside) isolated from the butanol fraction of COE against t -BHP-induced oxidative stress in HepG2 cells; the protection occurred through up-regulation of HO-1 via nuclear factor erythroid 2-related factor 2 (Nrf2) activation and modulation of the JNK/ERK/MAPK signaling pathway.

3.3. Anti-Inflammatory and Antioxidant Activity

COE has traditionally been used in Korea to treat eye inflammation, photophobia, and lacrimation. Pretreatment with the aqueous extract of C . obtusifolia L. inhibited interleukin (IL)-6 and cyclooxygenase-2 (COX-2) and reduced the activation of the transcription factor NF-κB p65 in colon tissues treated with dextran sulfate sodium . Two major anthraquinones from C .
obtusifolia L., obtusifolin and gluco-obtusifolin, reduced neuropathic and inflammatory pain . Pro-inflammatory cytokines (e.g., TNF- α , IL-1 β , IL-6) and activation of NF-κB have been strongly implicated in the initiation and development of inflammatory and neuropathic pain, and the administration of obtusifolin and gluco-obtusifolin (1 and 2 mg/kg) significantly inhibited this upregulation. This finding suggests that obtusifolin and gluco-obtusifolin inhibited the overexpression of spinal TNF-α, IL-1β, IL-6, and NF-κB p65 associated with inflammatory and neuropathic pain, which involves the regulation of neuroinflammatory processes and the neuroimmune system . In another study, water-extracted polysaccharides (CP) from the whole seeds of C . obtusifolia L. and two subfractions, CP-30 and CP-40, were obtained. CP, CP-30, and CP-40 possessed immunomodulatory activity, promoting phagocytosis and stimulating the production of nitric oxide (NO) and the cytokines TNF-α and IL-6 . Methanolic COE was investigated for antioxidant and health-relevant functionality. The extract exhibited a ferric-reducing antioxidant power of 1292 mM Fe[II] per 1 mg/mL extract, 49.92% inhibition of β-carotene degradation, 65.79% scavenging activity against DPPH, and 50.78% scavenging of superoxide radicals (at a concentration of 1 mg/mL). These antioxidant properties may be attributed to the total free phenolic content of the raw seeds, which was 13.33 ± 1.73 g catechin equivalent/100 g extract . Recently, Kwon et al. investigated the anti-inflammatory activity of major anthraquinone derivatives; among them, aurantio-obtusin inhibited iNOS expression without affecting iNOS enzyme activity, and the down-regulation mechanisms included interruption of JNK/IKK/NF-κB activation and proinflammatory cytokine production in lung-related cells. Additionally, aurantio-obtusin dose-dependently (10 and 100 mg/kg) inhibited the inflammatory responses in a mouse model of airway inflammation, LPS-induced acute lung injury.
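The radical-scavenging percentages quoted for the DPPH and superoxide assays above are typically computed from paired absorbance readings of the radical solution with and without the test extract. The following is a minimal illustrative sketch of that standard calculation; the absorbance values are hypothetical, not taken from the cited study:

```python
def pct_scavenging(abs_control: float, abs_sample: float) -> float:
    """Radical scavenging (%) from absorbance readings:
    (A_control - A_sample) / A_control * 100.
    For the DPPH assay, absorbance is usually read at ~517 nm."""
    return (abs_control - abs_sample) / abs_control * 100.0

# Hypothetical readings: DPPH solution alone vs. with extract at 1 mg/mL
a_control = 0.820
a_sample = 0.281

print(f"scavenging activity: {pct_scavenging(a_control, a_sample):.1f}%")
```

A lower sample absorbance means more of the colored radical has been quenched, so stronger antioxidant activity gives a higher percentage.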
Very recently, Hou et al. reported anti-inflammatory activity, with decreased production of NO and PGE2 and inhibition of iNOS, COX-2, TNF-α, and IL-6, as well as a reduction in the LPS-induced activation of nuclear factor-κB in RAW264.7 cells .

3.4. Antimicrobial Activity

Because many bacterial and fungal strains are resistant to a wide variety of antibiotics, medicinal plants have been studied for their potential antimicrobial properties. COE was active against several different microbes ( Bifidobacterium adolescentis , B . bifidum , B . longum , B . breve , Clostridium perfringens , Escherichia coli , Lactobacillus casei ). Isolated 1,2-dihydroxyanthraquinone strongly inhibited the growth of C . perfringens and E . coli and promoted the growth of B . bifidum . The C . obtusifolia L. leaf extract in petroleum ether and chloroform showed sensitivity against E . faecalis (minimal inhibitory concentration [MIC] 0.2725 mg/mL), whereas ethanol extracts showed sensitivity against A . fumigatus (MIC 0.3116 mg/mL). Similarly, stem extracts of C . obtusifolia L. in petroleum ether showed sensitivity against E . faecalis (MIC 0.407 mg/mL), ethanol extracts showed sensitivity against E . faecalis (MIC 0.3009 mg/mL), and chloroform extracts showed sensitivity against E . faecalis (MIC 0.4946 mg/mL) . The whole plant extract of C . obtusifolia significantly inhibited the growth of Staphylococcus aureus MRSA8 (MIC 64 μg/mL), E . coli AG100 (MIC 256 μg/mL), Pseudomonas aeruginosa PA01 (MIC 256 μg/mL), Enterobacter aerogenes EA289 (MIC 289 μg/mL), and Klebsiella pneumoniae KP55 (MIC 256 μg/mL) . The phytoalexin 2-( p -hydroxyphenoxy)-5,7-dihydroxychromone isolated from C . obtusifolia L. exhibited strong antifungal activity . The C . obtusifolia L. root extract and its constituents exhibited strong antibacterial activity. Emodin, 2,5-dimethoxybenzoquinone, questin, isotoralactone, and toralactone exhibited strong antibacterial activity against S .
aureus 209P (MICs 4.5, 19, 25, and 3 µg/mL, respectively) and E. coli NIHJ (MICs 25, 50, 50, 12, and 5.5 µg/mL, respectively) .

3.5. Antidiabetic Activity

Two key enzymes, protein tyrosine phosphatase 1B (PTP1B) and α-glucosidase, are targets for treating diabetes mellitus. The methanolic COE showed inhibitory activities against PTP1B and α-glucosidase. Of 15 anthraquinones from the extract, alaternin, physcion, chrysophanol, emodin, obtusin, questin, chryso-obtusin, aurantio-obtusin, 2-hydroxyemodin-1-methylether, gluco-obtusifolin, gluco-aurantio obtusin, and the naphthalene glycoside aloe-emodin exhibited the highest inhibitory activities against PTP1B and α-glucosidase in vitro . The effects of alaternin and emodin on the stimulation of glucose uptake by insulin-resistant human HepG2 cells were examined at concentrations ranging from 12.5 to 50 µM and 3.12 to 12.5 µM, respectively. In another study, five new anthraquinones were isolated from ethanol seed extracts of C . obtusifolia L. and evaluated for their antidiabetic activities through the inhibition of α-glucosidase in vitro . Obtusifolin isolated from C . obtusifolia L. may have an antihyperlipidemic effect; an intraperitoneal obtusifolin injection reduced blood lipid levels in streptozotocin-induced diabetic rats . Results from another study indicated that oral administration of obtusifolin significantly reversed the changes induced by hyperlipidemia in body weight, total cholesterol, triglycerides, low-density lipoprotein cholesterol, and high-density lipoprotein cholesterol; increased serum superoxide dismutase and nitric oxide; and reduced malondialdehyde . Recently, two new naphthalenic lactone glycosides, (3 S )-9,10-dihydroxy-7-methoxy-3-methyl-1-oxo-3,4-dihydro-1H-benzo[g]isochromene-3-carboxylic acid 9- O -β- d -glucopyranoside and (3 R )-cassialactone 9- O -β- d -glucopyranoside, were isolated from seeds of C. obtusifolia L.
that showed significant inhibitory activities against the formation of advanced glycation end-products (AGEs), with IC 50 values of 11.63 and 23.40 µM, respectively .

3.6. Antiplatelet Aggregation Inhibitory Activity

Ethanolic COE and three major anthraquinones (aurantio-obtusin, chryso-obtusin, and emodin) demonstrated inhibitory activity against ADP (adenosine 5′-diphosphate)-, arachidonic acid (AA)-, or collagen-induced platelet aggregation . Methanolic COE and different solvent-soluble fractions, including normal butanol ( n -BuOH) and dichloromethane (CH 2 Cl 2 ), exhibited antiplatelet aggregation activities. Furthermore, 17 anthraquinones, including gluco-obtusifolin, gluco-aurantio-obtusin, obtusifolin, and gluco-chryso-obtusin, were identified as active antiplatelet aggregation components .

3.7. Anticancer Activity

Polysaccharide COB1B1S2 and its sulfated derivative COB1B1S2-Sul were isolated from an alkaline COE. The human hepatocellular carcinoma cell lines Bel7402, SMMC7721, and Huh7, as well as HT-29 and Caco-2, were used to evaluate the anticancer effects of COB1B1S2 and COB1B1S2-Sul . COB1B1S2 had a weak inhibitory effect on Bel7402, Huh7, HT-29, and Caco-2 cells. By contrast, COB1B1S2-Sul significantly inhibited the growth of all cell lines, particularly Bel7402 cells at 250 µg/mL; the inhibition ratio was 61.7% . Three acetylated benzyl-β-resorcylate glycosides (2-benzyl-4,6-dihydroxy benzoic acid-6- O -[2,6- O -diacetyl]- d -glucopyranoside, 2-benzyl-4,6-dihydroxy benzoic acid-6- O -[3,6- O -diacetyl]- d -glucopyranoside, and 2-benzyl-4,6-dihydroxy benzoic acid-6- O -[4,6- O -diacetyl]- d -glucopyranoside) were isolated from seeds of C . obtusifolia and exhibited significant cytotoxicity against a human hepatoblastoma cell line, with IC 50 values of 4.6, 5.0, and 4.3 µg/mL, respectively . In addition, 12 compounds were isolated from seeds of C . obtusifolia and their anticancer activities evaluated in multiple cancer cell lines .
8-Hydroxy-1,7-dimethoxy-3-methylanthracene-9,10-dione-2- O -β- d -glucoside was active against the HCT-116, A549, HepG2, SGC7901, and LO2 cell lines, with IC 50 values of 4.5, 7.6, 22.8, 20.7, and 18.1 µg/mL, respectively. 6,8-Dihydroxy-1,7-dimethoxy-3-methylanthracene-9,10-dione-2- O -β- d -glucoside was only weakly active against HCT-116 (IC 50 , 43.0 µg/mL). 1-Desmethylobtusin had moderate cytotoxicity against the HCT-116, A549, and SGC7901 cell lines, with IC 50 values of 5.1, 10, and 25.4 µg/mL, respectively. Chryso-obtusin showed significant cytotoxic activity against the HCT-116, A549, SGC7901, and LO2 cell lines, with IC 50 values of 10.5 to 15.8 µg/mL. Obtusin was moderately active against the HCT-116, A549, and SGC7901 cell lines, with IC 50 values of 13.1, 29.2, and 15.2 µg/mL, respectively. Aurantio-obtusin was moderately active against the HCT-116, A549, SGC7901, and LO2 cell lines, with IC 50 values of 18.9 to 22.0 µg/mL. Chryso-obtusin-2-O-β- d -glucopyranoside was selectively cytotoxic against the HCT-116, A549, HepG2, SGC7901, and LO2 cell lines, with IC 50 values of 5.8 to 14.6 µg/mL. Finally, aurantio-obtusin-6-O-β- d -glucopyranoside was weakly cytotoxic against HCT-116 and SGC7901, with IC 50 values of 31.1 and 23.3 µg/mL, respectively .

3.8. Larvicidal Activity

The larvicidal activity of methanolic COE against early fourth-stage larvae of Aedes aegypti and Culex pipiens pallens was investigated . At 200 ppm, extracts of C . obtusifolia L. caused more than 90% mortality in larvae of Ae . aegypti and Cx . pipiens pallens . At 40 ppm, extracts of C . obtusifolia L. caused 51.4% and 68.5% mortality in fourth-stage larvae of Ae . aegypti and Cx . pipiens pallens , respectively. Larvicidal activity of the C . obtusifolia extract at 20 ppm was significantly reduced . In another study, different fractions of COE showed mosquito larvicidal activity against fourth instar larvae of A . aegypti , Aedes togoi , and Cx . pipiens pallens . However, the chloroform fraction of C .
obtusifolia extracts exhibited strong larvicidal activity with 100% mortality (at a concentration of 25 mg/L), and the isolated active compound emodin showed strong larvicidal activity, with LC 50 values of 1.4, 1.9, and 2.2 mg/L against C . pipiens pallens , A . aegypti , and A . togoi , respectively . The ethanolic leaf extract of C . obtusifolia L. was also investigated for larvicidal and oviposition deterrence effects against late third instar larvae of Anopheles stephensi . The leaf extracts displayed significant larvicidal activity, with LC 50 and LC 90 values of 52.2 and 108.7 mg/L, respectively. In addition, the oviposition study indicated that different concentrations of leaf extract greatly reduced the number of eggs deposited by gravid A . stephensi . At concentrations of 100, 200, 300, and 400 mg/L, the maximum percentages of effective repellency against oviposition were 75.5%, 83.0%, 87.2%, and 92.5%, respectively .

3.9. Other Activities

The methanol extract of C . obtusifolia L. and its isolated naphthopyrones cassiaside B2 and cassiaside C2 inhibited histamine release from rat peritoneal exudate mast cells induced by antigen–antibody reaction . The anti-angiogenic activity of two polysaccharides, COB1B1S2 and COB1B1S2-Sul, from C . obtusifolia L. seeds was evaluated by tube formation of HMEC-1 cells on Matrigel. COB1B1S2 at 50 or 100 µg/mL did not impair tube formation, but COB1B1S2-Sul at 50 or 100 µg/mL significantly disrupted tube formation; even at 50 µg/mL, COB1B1S2-Sul almost completely inhibited tube formation in HMEC-1 cells . Water-soluble polysaccharides (WSPs) from C . obtusifolia L. (pectic polysaccharides and hemicellulose) were identified.
These WSPs reduced pancreatic α-amylase activity by 20.5% and 28.9% (at concentrations of 20 and 80 mg/mL, respectively), reduced pancreatic lipase activity by about 18.9% (at a concentration of 80 mg/mL), and increased protease activity 5- to 7-fold (at concentrations of 20 and 80 mg/mL, respectively). These WSPs were also able to bind bile acids and reduce the amount of cholesterol available for absorption . The simultaneous determination and pharmacokinetic study of seven anthraquinones (chrysophanol, emodin, aloe-emodin, rhein, physcion, obtusifolin, and aurantio-obtusin) in rat plasma after oral administration of C . obtusifolia L. extract was investigated and may help to explain the bioactivity and clinical applications of C . obtusifolia L. . The effects of COE and its anthraquinones on muscle mitochondrial function were evaluated in vivo in rats and in vitro using mitochondrial energy metabolism models. The organic extract of C . obtusifolia L. and emodin significantly inhibited NADH: cytochrome c oxidoreductase activity of bovine heart mitochondrial particles and NADH: coenzyme Q oxidoreductase activity of porcine heart mitochondrial NADH dehydrogenase and exhibited protective effects of coenzyme Q against enzyme inhibition by anthraquinones . Inhibition of trypsin activity by C . obtusifolia L. seeds was investigated . A Kunitz-type trypsin inhibitor showed strong resistance against the midgut trypsin-like protease of Pieris rapae . In addition, a trypsin inhibitor gene ( CoTI1 ) was isolated from C . obtusifolia L. and exhibited dominant inhibitory activities against trypsin and trypsin-like proteases from Helicoverpa armigera , Spodoptera exigua , and Spodoptera litura . Moreover, Dong et al. , has been also reported that Cassia semen ( C. obtusifolia and C. tora ) and its major constituents possesses a wide spectrum of pharmacological properties.
Various studies have demonstrated the direct neuroprotective activities of the C. obtusifolia L. seed extract (COE) and its major constituents (anthraquinones), although more detailed studies are required to clarify the compositional features and neuroprotective activities of the anthraquinones. The ethanolic COE (25, 50, or 100 mg/kg) ameliorated scopolamine- or bilateral common carotid artery occlusion (2VO)-induced memory impairment by inhibiting acetylcholinesterase. COE (10 or 50 mg/kg/day) reduced memory impairment and neuronal damage caused by 2VO in a mouse model of transient global ischemia; it was suggested that the neuroprotective effects of COE are attributable to its anti-inflammatory properties, resulting in decreased expression of inducible nitric oxide synthase (iNOS) and cyclooxygenase-2 (COX-2) and increased expression of the neurotrophic factors pCREB and BDNF. Alaternin, an active compound in C. obtusifolia L., exhibits neuroprotective activity after transient cerebral hypoperfusion induced by bilateral common carotid artery occlusion (BCCAO). Administration of alaternin (10 mg/kg) prevented or reduced nitrotyrosine formation and lipid peroxidation, BCCAO-induced iNOS expression, and microglial activation. Drever et al. reported that ethanolic COE is neuroprotective against NMDA-induced calcium dysregulation and 3-nitropropionic acid-induced cell death in mouse hippocampal cultures. Recently, Paudel et al. also reported that four major compounds (cassiaside, rubrofusarin gentiobioside, aurantio-obtusin, and 2-hydroxyemodin 1-methylether) exhibited neuroprotective effects; among them, aurantio-obtusin showed promising neuroprotective effects by targeting various G-protein-coupled receptors and in a C57BL/6 mouse model of transient brain ischemia/reperfusion injury.

3.1.1. Anti-Alzheimer’s Disease Activity

The effects of the ethanolic extract of COE in Aβ-induced anti-Alzheimer’s disease (anti-AD) models have been reported.
Mechanistically, COE ameliorated Aβ-induced LTP impairment in acute hippocampal slices and prevented Aβ-induced GSK-3β activation. Moreover, COE prevented microglial activation as well as iNOS and COX activation induced by Aβ in the hippocampus, and in vivo studies have indicated that COE ameliorated Aβ-induced object recognition memory impairment. Two anthraquinones from C. obtusifolia L., obtusifolin and gluco-obtusifolin, improved scopolamine-induced learning and memory impairment in mice based on the passive avoidance and Morris water maze tests. Obtusifolin (0.25, 0.5, and 2 mg/kg) and gluco-obtusifolin (1, 2, and 4 mg/kg) significantly reversed scopolamine-induced cognitive impairment on the passive avoidance test; obtusifolin (0.5 mg/kg) and gluco-obtusifolin (2 mg/kg) improved escape latencies, swimming times in the target quadrant, and the number of crossings of the zone where the platform had previously been located on the Morris water maze test. The anti-AD properties of COE may be attributed to its constituents, such as anthraquinones and naphthopyrone glycosides. The methanolic seed extract and its solvent-soluble fractions from C. obtusifolia L. were tested for their acetylcholinesterase (AChE) and butyrylcholinesterase (BChE) inhibitory activities using Ellman's method. The ethyl acetate and butanol fractions significantly inhibited AChE activity at a final concentration of 100 µg/mL, with IC50 values of 9.45 ± 0.44 and 9.87 ± 0.70 μg/mL, respectively. The butanol (IC50 = 7.58 ± 0.51 μg/mL) and ethyl acetate (IC50 = 16.09 ± 0.16 μg/mL) fractions exhibited potent inhibitory activity against BChE. Furthermore, the butanol fraction (IC50 = 26.19 ± 0.72 μg/mL) significantly inhibited β-secretase (BACE1) activity. In addition, several anthraquinones (emodin, chrysophanol, physcion, obtusifolin, alaternin, questin, and aloe-emodin) that displayed strong anti-AD activity by inhibiting the AChE, BChE, and BACE1 enzymes were isolated from this plant. Recently, Shrestha et al.
observed anti-AD effects of naphthopyrone and its glycosides, including rubrofusarin, rubrofusarin 6-O-β-d-glucopyranoside, rubrofusarin 6-O-β-d-gentiobioside, nor-rubrofusarin 6-O-β-d-glucoside, isorubrofusarin 10-O-β-d-gentiobioside, and rubrofusarin 6-O-β-d-triglucoside, mediated by inhibition of the AChE, BChE, and BACE1 enzymes. The use of AChE, BChE, and BACE1 inhibitors has been a promising treatment strategy for AD; therefore, C. obtusifolia may be an effective agent for treating AD.

3.1.2. Prevention and Treatment of Parkinson’s Disease

A neuroprotective effect of COE was observed in both in vitro and in vivo models of Parkinson’s disease. In PC12 cells, COE reduced cell damage induced by 100 µM 6-hydroxydopamine and, at 0.1 to 10 µg/mL, inhibited the overproduction of reactive oxygen species, glutathione depletion, mitochondrial membrane depolarization, and caspase-3 activation. In addition, COE displayed radical scavenging effects in DPPH and ABTS assays, which suggests that COE may be useful for treating Parkinson’s disease.
Few studies have demonstrated the hepatoprotective activities of COE, and further studies are required to establish the hepatoprotective mechanisms of the major COE anthraquinones. The protective effects of ethanolic COE against CCl4-induced liver injury in mice were evaluated by assessing aminotransferase activities, histopathological changes, hepatic and mitochondrial antioxidant indices, and cytochrome P450 2E1 (CYP2E1) activity. Administration of COE (0.5, 1, or 2 g/kg) markedly reduced ALT and AST release, Ca2+-induced mitochondrial membrane permeability transition, and CYP2E1 activity. In addition, COE significantly reduced hepatic and mitochondrial malondialdehyde levels, increased hepatic and mitochondrial glutathione levels, and restored superoxide dismutase, glutathione reductase, and glutathione S-transferase activities. Meng et al. reported hepatoprotective effects of ethanolic COE in non-alcoholic fatty liver disease (NAFLD): administration of COE (0.5, 1, or 2 g/kg) markedly reduced the levels of AST, ALT, TG, TC, TNF-α, IL-6, IL-8, and MDA, while increasing SOD and GSH levels and the expression of LDL-R mRNA. Seo et al. observed hepatoprotective effects of ethanolic COE and its components (e.g., toralactone glycoside) against t-BHP-induced cell death in HepG2 cells. The Cassia anthraquinones aurantio-obtusin and obtusifolin also protected against tacrine-induced cytotoxicity in HepG2 cells. Recently, Ali et al. investigated the hepatoprotective effects of different soluble fractions of methanolic COE and its active components against t-BHP-induced oxidative stress in HepG2 cells; the proposed mechanism was that alaternin, aloe-emodin, and cassiaside potently scavenge ROS in t-BHP-treated HepG2 cells, with the decrease in ROS generation paralleling the up-regulation of glutathione (GSH). Very recently, Paudel et al.
investigated the hepatoprotective activity of an anthraquinone (1-desmethylaurantio-obtusin 2-O-β-d-glucopyranoside) and two naphthopyrone glycosides (rubrofusarin 6-O-β-d-apiofuranosyl-(1→6)-O-β-d-glucopyranoside and rubrofusarin 6-O-β-gentiobioside) isolated from the butanol fraction of COE against t-BHP-induced oxidative stress in HepG2 cells; protection occurred through up-regulation of HO-1 via nuclear factor erythroid 2-related factor 2 (Nrf2) activation and modulation of the JNK/ERK/MAPK signaling pathway.
COE has traditionally been used in Korea to treat eye inflammation, photophobia, and lacrimation. Pretreatment with the aqueous extract of C. obtusifolia L. inhibited interleukin (IL)-6 and cyclooxygenase-2 (COX-2) and reduced the activation of the transcription factor nuclear factor-κB (NF-κB) p65 in colon tissues treated with dextran sulfate sodium. Two major anthraquinones from C. obtusifolia, obtusifolin and gluco-obtusifolin, reduced neuropathic and inflammatory pain. Pro-inflammatory cytokines (e.g., TNF-α, IL-1β, IL-6) and activation of NF-κB have been strongly implicated in the initiation and development of inflammatory and neuropathic pain, and the administration of obtusifolin and gluco-obtusifolin (1 and 2 mg/kg) significantly inhibited this upregulation. This finding suggests that obtusifolin and gluco-obtusifolin inhibited the overexpression of spinal TNF-α, IL-1β, IL-6, and NF-κB p65 associated with inflammatory and neuropathic pain, which involves the regulation of neuroinflammatory processes and the neuroimmune system. In another study, water-extracted polysaccharides (CP) from the whole seeds of C. obtusifolia L. and two subfractions, CP-30 and CP-40, were obtained; CP, CP-30, and CP-40 possessed immunomodulatory activity, promoting phagocytosis and stimulating the production of nitric oxide (NO) and the cytokines TNF-α and IL-6. Methanolic COE was also investigated for antioxidant and health-relevant functionality. The extract exhibited antioxidant (ferric-reducing) power of 1292 mM Fe[II] per 1 mg/mL of extract, 49.92% inhibition of β-carotene degradation, 65.79% scavenging activity against DPPH, and 50.78% scavenging of superoxide radicals (at a concentration of 1 mg/mL). These antioxidant properties may be attributed to the total free phenolic content of the raw seeds, which was 13.33 ± 1.73 g catechin equivalent/100 g extract. Recently, Kwon et al.
investigated the anti-inflammatory activity of major anthraquinone derivatives; among them, aurantio-obtusin inhibited iNOS expression without affecting iNOS enzyme activity, and the down-regulation mechanisms included interruption of JNK/IKK/NF-κB activation and of proinflammatory cytokine production in lung-related cells. Additionally, aurantio-obtusin dose-dependently (10 and 100 mg/kg) inhibited the inflammatory responses in a mouse model of airway inflammation, LPS-induced acute lung injury. Very recently, Hou et al. reported anti-inflammatory activity involving decreased production of NO and PGE2 and inhibition of iNOS, COX-2, TNF-α, and IL-6, along with a reduction in the LPS-induced activation of nuclear factor-κB in RAW264.7 cells.
Because many bacterial and fungal strains are resistant to a wide variety of antibiotics, medicinal plants have been studied for their potential antimicrobial properties. COE was active against several different microbes (Bifidobacterium adolescentis, B. bifidum, B. longum, B. breve, Clostridium perfringens, Escherichia coli, Lactobacillus casei). Isolated 1,2-dihydroxyanthraquinone strongly inhibited the growth of C. perfringens and E. coli and promoted the growth of B. bifidum. The C. obtusifolia L. leaf extract in petroleum ether and chloroform showed sensitivity against E. faecalis (minimal inhibitory concentration [MIC] 0.2725 mg/mL), whereas ethanol extracts showed sensitivity against A. fumigatus (MIC 0.3116 mg/mL). Similarly, stem extracts of C. obtusifolia L. in petroleum ether showed sensitivity against E. faecalis (MIC 0.407 mg/mL), ethanol extracts showed sensitivity against E. faecalis (MIC 0.3009 mg/mL), and chloroform extracts showed sensitivity against E. faecalis (MIC 0.4946 mg/mL). The whole-plant extract of C. obtusifolia significantly inhibited the growth of Staphylococcus aureus MRSA8 (MIC 64 μg/mL), E. coli AG100 (MIC 256 μg/mL), Pseudomonas aeruginosa PA01 (MIC 256 μg/mL), Enterobacter aerogenes EA289 (MIC 289 μg/mL), and Klebsiella pneumoniae KP55 (MIC 256 μg/mL). The phytoalexin 2-(p-hydroxyphenoxy)-5,7-dihydroxychromone isolated from C. obtusifolia L. exhibited strong antifungal activity. The C. obtusifolia L. root extract and its constituents exhibited strong antibacterial activity: emodin, 2,5-dimethoxybenzoquinone, questin, isotoralactone, and toralactone were active against S. aureus 209P (MICs 4.5, 19, 25, and 3 µg/mL, respectively) and E. coli NIHJ (MICs 25, 50, 50, 12, and 5.5 µg/mL, respectively).
Two key enzymes, protein tyrosine phosphatase 1B (PTP1B) and α-glucosidase, are important targets for treating diabetes mellitus, and methanolic COE exhibited inhibitory activities against both. Of 15 anthraquinones from the extract, alaternin, physcion, chrysophanol, emodin, obtusin, questin, chryso-obtusin, aurantio-obtusin, 2-hydroxyemodin 1-methylether, gluco-obtusifolin, gluco-aurantio-obtusin, and the naphthalene glycoside aloe-emodin exhibited the highest inhibitory activities against PTP1B and α-glucosidase in vitro. The effects of alaternin and emodin on the stimulation of glucose uptake by insulin-resistant human HepG2 cells were examined at concentrations ranging from 12.5 to 50 µM and 3.12 to 12.5 µM, respectively. In another study, five new anthraquinones were isolated from ethanol seed extracts of C. obtusifolia L. and evaluated for antidiabetic activity through inhibition of α-glucosidase in vitro. Obtusifolin isolated from C. obtusifolia L. may have an antihyperlipidemic effect; an intraperitoneal obtusifolin injection reduced blood lipid levels in streptozotocin-induced diabetic rats. Results from another study indicated that oral administration of obtusifolin significantly reversed the changes induced by hyperlipidemia in body weight, total cholesterol, triglycerides, low-density lipoprotein cholesterol, and high-density lipoprotein cholesterol; increased serum superoxide dismutase and nitric oxide; and reduced malondialdehyde. Recently, two new naphthalenic lactone glycosides, (3S)-9,10-dihydroxy-7-methoxy-3-methyl-1-oxo-3,4-dihydro-1H-benzo[g]isochromene-3-carboxylic acid 9-O-β-d-glucopyranoside and (3R)-cassialactone 9-O-β-d-glucopyranoside, were isolated from seeds of C. obtusifolia L. and showed significant inhibitory activities against the formation of advanced glycation end-products (AGEs), with IC50 values of 11.63 and 23.40 µM, respectively.
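IC50 values like those reported throughout this section are derived from dose-response measurements. As an illustration of how such a value can be estimated, the sketch below uses log-linear interpolation between the two concentrations that bracket 50% inhibition; the concentrations and inhibition percentages are hypothetical examples, not data from any study cited here.

```python
import math

def ic50_loglinear(concs, inhibitions):
    """Estimate IC50 by log-linear interpolation between the two
    concentrations that bracket 50% inhibition.

    concs: ascending concentrations (same units as the returned IC50)
    inhibitions: percent inhibition observed at each concentration
    """
    pairs = list(zip(concs, inhibitions))
    for (c1, i1), (c2, i2) in zip(pairs, pairs[1:]):
        if i1 < 50 <= i2:
            # Interpolate on log10(concentration) between the bracketing points.
            frac = (50 - i1) / (i2 - i1)
            logc = math.log10(c1) + frac * (math.log10(c2) - math.log10(c1))
            return 10 ** logc
    raise ValueError("50% inhibition not bracketed by the data")

# Hypothetical dose-response data: concentration (µM) -> % inhibition
concs = [1, 10, 100]
inhib = [20, 45, 80]
print(ic50_loglinear(concs, inhib))  # IC50 between 10 and 100 µM
```

In practice IC50 values are usually obtained by fitting a four-parameter logistic curve to the full dose-response data, but the bracketing interpolation above captures the basic idea.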
Ethanolic COE and three major anthraquinones (aurantio-obtusin, chryso-obtusin, and emodin) demonstrated inhibitory activity against adenosine 5′-diphosphate (ADP)-, arachidonic acid (AA)-, or collagen-induced platelet aggregation. Methanolic COE and its solvent-soluble fractions, including n-butanol (n-BuOH) and dichloromethane (CH2Cl2), also exhibited antiplatelet aggregation activity. Furthermore, 17 anthraquinones, including gluco-obtusifolin, gluco-aurantio-obtusin, obtusifolin, and gluco-chryso-obtusin, were identified as active antiplatelet aggregation components.
Polysaccharide COB1B1S2 and its sulfated derivative COB1B1S2-Sul were isolated from an alkaline COE. The human hepatocellular carcinoma cell lines Bel7402, SMMC7721, and Huh7, as well as HT-29 and Caco-2, were used to evaluate the anticancer effects of COB1B1S2 and COB1B1S2-Sul. COB1B1S2 had a weak inhibitory effect on Bel7402, Huh7, HT-29, and Caco-2 cells. By contrast, COB1B1S2-Sul significantly inhibited the growth of all cell lines, particularly Bel7402 cells at 250 µg/mL, for which the inhibition ratio was 61.7%. Three acetylated benzyl-β-resorcylate glycosides (2-benzyl-4,6-dihydroxybenzoic acid 6-O-[2,6-O-diacetyl]-d-glucopyranoside, 2-benzyl-4,6-dihydroxybenzoic acid 6-O-[3,6-O-diacetyl]-d-glucopyranoside, and 2-benzyl-4,6-dihydroxybenzoic acid 6-O-[4,6-O-diacetyl]-d-glucopyranoside) were isolated from seeds of C. obtusifolia and exhibited significant cytotoxicity against a human hepatoblastoma cell line, with IC50 values of 4.6, 5.0, and 4.3 µg/mL, respectively. In addition, 12 compounds were isolated from seeds of C. obtusifolia and their anticancer activities evaluated in multiple cancer cell lines. 8-Hydroxy-1,7-dimethoxy-3-methylanthracene-9,10-dione-2-O-β-d-glucoside was active against the HCT-116, A549, HepG2, SGC7901, and LO2 cell lines, with IC50 values of 4.5, 7.6, 22.8, 20.7, and 18.1 µg/mL, respectively. 6,8-Dihydroxy-1,7-dimethoxy-3-methylanthracene-9,10-dione-2-O-β-d-glucoside was only weakly active against HCT-116 (IC50, 43.0 µg/mL). 1-Desmethylobtusin had moderate cytotoxicity against the HCT-116, A549, and SGC7901 cell lines, with IC50 values of 5.1, 10, and 25.4 µg/mL, respectively. Chryso-obtusin showed significant cytotoxic activity against the HCT-116, A549, SGC7901, and LO2 cell lines, with IC50 values of 10.5 to 15.8 µg/mL. Obtusin was moderately active against the HCT-116, A549, and SGC7901 cell lines, with IC50 values of 13.1, 29.2, and 15.2 µg/mL, respectively.
Aurantio-obtusin was moderately active against the HCT-116, A549, SGC7901, and LO2 cell lines, with IC50 values of 18.9 to 22.0 µg/mL. Chryso-obtusin 2-O-β-d-glucopyranoside was selectively cytotoxic against the HCT-116, A549, HepG2, SGC7901, and LO2 cell lines, with IC50 values of 5.8 to 14.6 µg/mL. Finally, aurantio-obtusin 6-O-β-d-glucopyranoside was weakly cytotoxic against HCT-116 and SGC7901, with IC50 values of 31.1 and 23.3 µg/mL, respectively.
The larvicidal activity of methanolic COE against early fourth-stage larvae of Aedes aegypti and Culex pipiens pallens has been investigated. At 200 ppm, extracts of C. obtusifolia L. caused more than 90% mortality in larvae of Ae. aegypti and Cx. pipiens pallens. At 40 ppm, the extracts caused 51.4% and 68.5% mortality in fourth-stage larvae of Ae. aegypti and Cx. pipiens pallens, respectively, and larvicidal activity at 20 ppm was significantly reduced. In another study, different fractions of COE showed mosquito larvicidal activity against fourth-instar larvae of Ae. aegypti, Aedes togoi, and Cx. pipiens pallens. The chloroform fraction of C. obtusifolia extracts exhibited strong larvicidal activity, with 100% mortality at a concentration of 25 mg/L, and the isolated active compound emodin showed strong larvicidal activity, with LC50 values of 1.4, 1.9, and 2.2 mg/L against Cx. pipiens pallens, Ae. aegypti, and Ae. togoi, respectively. The ethanolic leaf extract of C. obtusifolia L. was also investigated for larvicidal and oviposition deterrence effects against late third-instar larvae of Anopheles stephensi. Leaf extracts displayed significant larvicidal activity, with LC50 and LC90 values of 52.2 and 108.7 mg/L, respectively. In addition, the oviposition study indicated that different concentrations of leaf extract greatly reduced the number of eggs deposited by gravid An. stephensi: at concentrations of 100, 200, 300, and 400 mg/L, the maximum percentages of effective repellency against oviposition were 75.5%, 83.0%, 87.2%, and 92.5%, respectively.
The methanol extract of C. obtusifolia L. and its isolated naphthopyrones cassiaside B2 and cassiaside C2 inhibited histamine release from rat peritoneal exudate mast cells induced by an antigen-antibody reaction. The anti-angiogenic activity of two polysaccharides from C. obtusifolia L. seeds, COB1B1S2 and COB1B1S2-Sul, was evaluated by tube formation of HMEC-1 cells on Matrigel. COB1B1S2 at 50 or 100 µg/mL did not impair tube formation, whereas COB1B1S2-Sul at 50 or 100 µg/mL significantly disrupted it; even at 50 µg/mL, COB1B1S2-Sul could completely inhibit tube formation in HMEC-1 cells. Water-soluble polysaccharides (WSPs) from C. obtusifolia L. (pectic polysaccharides and hemicellulose) have also been identified. These WSPs reduced pancreatic α-amylase activity by 20.5% and 28.9% (at concentrations of 20 and 80 mg/mL, respectively), reduced pancreatic lipase activity by about 18.9% (at 80 mg/mL), and increased protease activity 5- to 7-fold (at 20 and 80 mg/mL). The WSPs were also able to bind bile acids and reduce the amount of cholesterol available for absorption. The simultaneous determination and pharmacokinetic study of seven anthraquinones (chrysophanol, emodin, aloe-emodin, rhein, physcion, obtusifolin, and aurantio-obtusin) in rat plasma after oral administration of C. obtusifolia L. extract has been reported and may help to explain the bioactivity and clinical applications of C. obtusifolia L. The effects of COE and its anthraquinones on muscle mitochondrial function were evaluated in vivo in rats and in vitro using mitochondrial energy metabolism models. The organic extract of C. obtusifolia L.
and emodin significantly inhibited the NADH:cytochrome c oxidoreductase activity of bovine heart mitochondrial particles and the NADH:coenzyme Q oxidoreductase activity of porcine heart mitochondrial NADH dehydrogenase, and coenzyme Q exhibited protective effects against enzyme inhibition by anthraquinones. Inhibition of trypsin activity by C. obtusifolia L. seeds has also been investigated: a Kunitz-type trypsin inhibitor showed strong resistance against the midgut trypsin-like protease of Pieris rapae. In addition, a trypsin inhibitor gene (CoTI1) was isolated from C. obtusifolia L. and exhibited dominant inhibitory activities against trypsin and trypsin-like proteases from Helicoverpa armigera, Spodoptera exigua, and Spodoptera litura. Moreover, Dong et al. also reported that Cassia semen (C. obtusifolia and C. tora) and its major constituents possess a wide spectrum of pharmacological properties.
As presented in this review, pharmacological studies on C. obtusifolia L. and its putative active compounds, especially anthraquinones and naphthopyrones, support several biological activities of C. obtusifolia that can potentially impact human health. Anthraquinones and naphthopyrones can be effectively isolated and purified from C. obtusifolia seeds, leaves, roots, and the whole plant using various extraction and analytical methods, mainly separation-based methods such as TLC, HPLC, high-speed counter-current chromatography (HSCCC), and column chromatography (silica gel, reverse-phase, and Sephadex). The semi-shrubby herb C. obtusifolia L., which belongs to the family Leguminosae, has gained popularity because of its medicinal and historical importance. It has been widely used in traditional medicine to treat headaches, dizziness, dysentery, and eye disease. In addition, C. obtusifolia L. is important to the food industry and possesses a wide spectrum of pharmacological properties (e.g., anti-allergic, antidiabetic, anti-inflammatory, antimicrobial, antioxidant, hepatoprotective, neuroprotective, anti-Alzheimer’s disease, antiplatelet aggregation, and larvicidal activities) that are associated with its diverse chemical constituents (e.g., anthraquinones, naphthopyrones, terpenoids, flavonoids, polysaccharides, and lipids). The number of studies on these bioactive compounds in biomedicine is increasing, suggesting that they might have great medical significance in the future. Although the bioactivities of seed extracts and compounds isolated from C. obtusifolia L. have been substantiated in vitro and in vivo, the mechanisms of action remain largely unknown; thus, there are still opportunities and challenges in this area, and additional studies are required before C. obtusifolia L. and its components can be considered for further clinical use. In conclusion, C. obtusifolia L.
is an edible medicinal plant that is important to the food industry and has a wide range of potential pharmacological uses. This review presents a summary of studies published to date on this promising plant.
Prognostic value of remnant-like particle cholesterol in ischemic heart failure patients following percutaneous coronary intervention

Introduction

Heart failure (HF), a condition estimated to affect over 37.7 million individuals worldwide, is an increasing global burden. With the ongoing demographic shift toward an aging population, the burden of heart failure is anticipated to grow considerably in the coming years. In patients with ischemic heart disease (IHD), myocardial ischemia or infarction due to the accumulation of atherosclerotic plaques in the epicardial arteries leads to impaired left ventricular function and is the principal etiology of heart failure. Despite undergoing percutaneous coronary intervention (PCI), patients with ischemic heart failure (IHF) continue to face poor prognoses and high mortality rates, as demonstrated by several retrospective cohort studies from Anzhen Hospital, which underscore the ongoing challenges in improving survival outcomes for this population. Atherosclerosis and coronary heart disease are closely linked to dyslipidemia. Previous research suggests that remnant-like particle cholesterol (RLP-C), like low-density lipoprotein cholesterol (LDL-C), plays a role in the development and progression of coronary atherosclerosis. Mendelian randomization studies have demonstrated a strong association between elevated RLP-C levels and an increased risk of IHD, including myocardial infarction. Recent studies suggest that incorporating RLP-C levels into the assessment of IHD risk, alongside traditional factors such as total cholesterol (TC) and high-density lipoprotein cholesterol (HDL-C) levels, is a potential strategy.
This approach holds promise in guiding the evaluation of cardiovascular event risk and the appropriate selection of individuals for statin therapy, ultimately alleviating the burden of IHD. Given this evidence, the current study hypothesized that RLP-C may influence the prognosis of IHF patients. To date, no investigations have explored the prognostic implications of RLP-C in patients with IHF undergoing PCI.
Method

2.1. Study population

This retrospective cohort study, conducted at Beijing Anzhen Hospital, enrolled 2036 adult patients with IHF who underwent elective PCI. To be included, patients had to meet the following criteria: (1) a confirmed diagnosis of IHF according to the International Classification of Diseases, 10th revision (ICD-10), including I50.106 (left ventricular failure), I50.001 (congestive heart failure), I50.902 (cardiac insufficiency), I50.919 (diastolic heart failure), I50.905 (chronic heart failure), or I50.911 (heart failure, unspecified); and (2) concomitant multivessel disease (MVD), defined as coronary artery stenosis exceeding 50% in at least two vessels or left main (LM) artery disease. Exclusion criteria included a history of coronary artery bypass grafting (CABG), malignancies potentially influencing long-term survival, or loss to follow-up. Additionally, individuals with a left ventricular ejection fraction (LVEF) of 50% or higher, missing lipid data, or acute myocardial infarction (MI), and those who refused to provide informed consent, were excluded. In total, 2036 eligible patients meeting the specified criteria were included in the final analysis.

2.2. Data collection and definitions

Data were extracted from Beijing Anzhen Hospital's electronic medical record system, covering a spectrum of data generated during hospitalization (detailed information can be found in the supplement). Lesion characteristics were assessed by two cardiologists to ensure accuracy and consistency. Coronary artery lesion characteristics were defined as follows. LM disease: an angiographically estimated stenosis >50% or a fractional flow reserve <0.80 in the left main coronary artery ostium, mid-shaft, or distal bifurcation.
Three-vessel disease: stenosis ≥ 50% in more than two main coronary branches (vessel diameter ≥ 2 mm). Chronic total occlusion lesion: a lesion with complete obstruction [thrombolysis in myocardial infarction (TIMI) flow grade 0] lasting longer than 3 months, judged from the previous medical history or coronary angiogram results. Diffuse lesion: a single stenotic lesion ≥ 20 mm in length. In-stent restenosis: stenosis ≥ 50% occurring within a stent. Lesion complexity was quantified by the Synergy between PCI with Taxus and Cardiac Surgery (SYNTAX) score. PCI procedures followed the guidelines established by the Chinese medical community. SYNTAX scores were calculated for each participant using the online SYNTAX score algorithm. 2.3. Follow-up Regular follow-up assessments were conducted at specific intervals after the baseline PCI. Information on major adverse cardiovascular events (MACE) was gathered via telephone surveys and outpatient consultations and cross-referenced with medical records for verification. 2.4. Grouping and outcomes Patients were categorized by RLP-C tertiles. RLP-C levels were calculated as TC minus LDL-C minus HDL-C, in accordance with lipid guidelines. The principal outcome was MACE, encompassing all-cause mortality, non-fatal MI, and any revascularization. MI was defined according to the fourth universal definition. Secondary outcomes comprised the individual MACE components. When multiple adverse events occurred during follow-up, the most severe outcome was prioritized for analysis, with the hierarchy all-cause mortality > non-fatal MI > any revascularization. When a single event occurred multiple times, only the first occurrence was analyzed. Follow-up extended until June 2022. 2.5.
Statistical analysis Normally distributed variables were compared using ANOVA and are presented as means with standard deviations (SD); non-normally distributed variables were analyzed with the Kruskal-Wallis test and are presented as medians with 25th and 75th percentiles. Categorical variables were evaluated with the Chi-squared test and are presented as numbers and percentages. Kaplan-Meier survival analyses were conducted to estimate adverse event incidence rates across RLP-C tertile groups, with log-rank tests for intergroup comparisons. Cox proportional hazards regression was used to calculate hazard ratios (HR) and 95% confidence intervals (CI) for the principal and secondary outcomes, with variables selected for multiple regression based on clinical judgment or univariate analysis (p < 0.05). Three adjustment models were employed, gradually incorporating additional variables (detailed information can be found in the supplement). In addition to the primary analysis, RLP-C was categorized using a cutoff of 0.8 mmol/L, commonly used in the Chinese population as the upper limit of the normal range; this categorical variable was analyzed with Cox proportional hazards models to assess its association with MACE and other clinical outcomes. Subgroup analyses evaluated the impact of RLP-C on MACE within different subgroups, with p-values for interaction calculated. Additionally, a restricted cubic spline (RCS) model was used to explore the nonlinear correlation between RLP-C and the principal outcome, with the same covariates as the third adjustment model. The number of knots was selected by the lowest Akaike information criterion, resulting in three knots. Statistical analyses were performed with Stata and R software. A level of p < 0.05 was deemed statistically significant.
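As described above, RLP-C was derived as TC minus LDL-C minus HDL-C and patients were grouped by tertiles of the result. A minimal sketch of that derivation (the lipid values in mmol/L are illustrative, not study data, and the function names are hypothetical):

```python
def rlp_c(tc: float, ldl_c: float, hdl_c: float) -> float:
    """Remnant lipoprotein cholesterol: total cholesterol minus LDL-C and HDL-C (mmol/L)."""
    return tc - ldl_c - hdl_c

def tertile_cutpoints(values):
    """Return the two cut points that split the sorted values into tertiles."""
    s = sorted(values)
    n = len(s)
    return s[n // 3], s[2 * n // 3]

def assign_tertile(value, cuts):
    """Map a value to tertile 1, 2, or 3 given (lower, upper) cut points."""
    lower, upper = cuts
    if value < lower:
        return 1
    if value < upper:
        return 2
    return 3

# Hypothetical lipid panels (mmol/L): (TC, LDL-C, HDL-C) per patient
panels = [(4.8, 2.9, 1.2), (5.6, 3.1, 1.0), (4.2, 2.4, 1.4),
          (6.0, 3.2, 0.9), (5.1, 2.8, 1.1), (4.5, 2.6, 1.3)]
rlp = [rlp_c(*p) for p in panels]
cuts = tertile_cutpoints(rlp)
groups = [assign_tertile(v, cuts) for v in rlp]
```

In the study itself the tertile cut points would come from the full 2036-patient cohort, and outcome models would then compare Tertile 3 against Tertile 1.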
Results 3.1. Subjects and baseline characteristics The study included 2036 participants, and their baseline characteristics were summarized. Higher RLP-C tertiles were associated with younger age; higher diastolic blood pressure, heart rate, and body mass index; more comorbidities such as diabetes and renal insufficiency; and a lower prevalence of atrial fibrillation. Red blood cell count, FBG, triglyceride, albumin, creatinine, TC, LDL-C, uric acid, HbA1c, and RLP-C increased as RLP-C tertiles rose, while AST, eGFR, and HDL-C decreased. Patients in higher RLP-C tertiles received more loop diuretics, spironolactone, sacubitril/valsartan, and alpha-glucosidase inhibitors. Angiographic data revealed that higher RLP-C levels were associated with a higher prevalence of three-vessel disease and higher SYNTAX scores. 3.2. Associations between RLP-C and outcomes The overall incidence of MACE was 33.9%. Higher RLP-C tertiles were associated with a significant increase in MACE, with Tertile 3 showing a higher rate than Tertile 1 (49.7% vs 22.3%, p < 0.001). The rates of all-cause mortality and any revascularization also increased significantly with higher RLP-C tertiles (p < 0.001). Survival curves confirmed these associations, demonstrating a significant increase in principal and secondary outcomes as RLP-C tertiles increased (log-rank, p < 0.05 for all). These findings are illustrated in . The associations between RLP-C and the outcomes were examined using Cox regression models. Higher tertiles of RLP-C showed an independent association with an increased risk of the principal and secondary outcomes in an unadjusted model; a similar correlation with increased risk was observed when RLP-C was analyzed as a continuous variable.
After adjusting for age, sex, and additional confounding variables, the highest tertile of RLP-C remained significantly associated with the highest risk of MACE (Tertile 3 vs Tertile 1: HR 2.57, 95% CI 2.03-3.26; p < 0.001, p for trend < 0.001), all-cause mortality (HR 3.14, 95% CI 2.20-4.47; p < 0.001, p for trend < 0.001), and any revascularization (HR 2.27, 95% CI 1.59-3.25; p < 0.001, p for trend < 0.001). The associations of RLP-C with MACE (HR 1.50, 95% CI 1.15-1.98; p = 0.003) and all-cause mortality (HR 1.77, 95% CI 1.22-2.57; p = 0.003) remained consistent when RLP-C was analyzed as a continuous variable. However, no significant correlation was found between continuous RLP-C and non-fatal MI or any revascularization. In a secondary analysis, RLP-C was categorized using a cutoff of 0.8 mmol/L to further investigate its association with clinical outcomes; higher RLP-C levels were significantly associated with increased risk of MACE, all-cause mortality, and revascularization. These findings are presented in the supplementary materials ( Supplementary Tables S1 and S2 ). The nonlinear association between RLP-C and MACE was examined using RCS. After adjusting for confounding factors, a significant nonlinear relationship was found (nonlinear p < 0.001), with the risk of MACE generally increasing as RLP-C levels rose. 3.3. Subgroup analysis No obvious interaction was observed in any of the enrolled subgroups.
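The survival comparisons reported above rest on the Kaplan-Meier product-limit estimator, computed per tertile and compared with log-rank tests. As a hedged illustration of the estimator itself (toy follow-up data, not the study's), it can be written in a few lines of pure Python:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times: follow-up time per subject; events: 1 = event (e.g. MACE), 0 = censored.
    Returns (event_times, survival_probabilities) at each time where an event occurred.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv, out_t, out_s = 1.0, [], []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = 0
        n_at_t = 0
        # Group all subjects sharing this follow-up time (events and censorings)
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            n_at_t += 1
            i += 1
        if deaths:
            surv *= 1 - deaths / at_risk  # step down only at event times
            out_t.append(t)
            out_s.append(surv)
        at_risk -= n_at_t
    return out_t, out_s

# Toy follow-up data in months (event = 1 is MACE, event = 0 is censoring)
times = [3, 5, 5, 8, 12, 12, 15]
events = [1, 1, 0, 1, 0, 1, 0]
t, s = kaplan_meier(times, events)
```

In practice this would be done per RLP-C tertile (e.g. with the R `survival` package or Python's `lifelines`), with the curves then compared by log-rank test as in the study.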
Discussion This retrospective evaluation of RLP-C in IHF patients undergoing PCI highlighted its prognostic significance. Elevated RLP-C levels were associated with a higher incidence of MACE, and an independent correlation between RLP-C and elevated MACE risk was confirmed. RCS analysis demonstrated a nonlinear association between RLP-C and MACE. Importantly, this association was consistent across patient subgroups. Globally, HF remains a leading cause of death, with around two-thirds of HF cases attributed to IHD. Ischemic events trigger maladaptive remodeling of cardiomyocytes, causing abnormal cavity dilation and systolic dysfunction due to increased extracellular matrix. Moreover, improved survival after MI has increased the prevalence of HF, expanding the population affected by IHF. Reliable biomarkers are therefore needed for assessing prognosis in patients with IHF after PCI. IHD is well known to be associated with elevated LDL-C, and this risk can be lowered by reducing LDL-C levels. However, the residual risk of recurrent cardiovascular outcomes remains high for patients with IHD even after LDL-C is controlled to recommended levels. Studies have shown that a significant portion of this residual risk may be attributable to elevated triglyceride-rich lipoproteins (TGRL), which comprise very low-density lipoproteins (VLDLs) and intermediate-density lipoproteins (IDLs). In the fasting state, RLP-C represents the cholesterol content of TGRL. Increasing evidence indicates that RLP-C, rather than triglyceride (TG), better reflects cholesterol levels in VLDL and better represents TGRL-related cardiovascular risk. Consequently, researchers have increasingly focused on the mechanism by which RLP-C affects the cardiovascular system and on its prognostic significance.
Once RLP-C enters the arterial wall, mononuclear phagocytes engulf and internalize it and become foam cells. Extracellular lipids released by apoptotic macrophages then accumulate in the central region of the plaque, forming the necrotic core of the atherosclerotic lesion. Hence, akin to LDL-C, increased levels of RLP-C facilitate cholesterol accumulation in the arterial wall and the advancement of atherosclerosis, culminating in thrombosis and IHD. In a study by Fukushima H et al., elevated RLP-C levels were positively linked to an increased risk of coronary artery disease (CAD) and an adverse prognosis among CAD patients. Similarly, in a prospective cohort study of 292 stable CAD patients with LDL-C levels <70 mg/dL, RLP-C emerged as an independent predictor of cardiovascular events and played a pivotal role in the residual risk of future cardiovascular events. A study of 5414 IHD patients revealed a correlation between elevated RLP-C levels and heightened all-cause mortality. Nguyen SV et al. similarly concluded that RLP-C is a valuable tool for assessing the risk of secondary cardiovascular events in patients with acute coronary syndrome. 4.1. Clinical implications and future directions This is the first investigation to establish potential associations between RLP-C and clinical outcomes in IHF patients undergoing PCI. The current study demonstrated that RLP-C is a robust prognostic indicator for adverse outcomes in IHF patients undergoing PCI, suggesting that it holds promise as a valuable tool in clinical practice, enhancing risk assessment and stratification beyond traditional risk factors, particularly in this population. In the RCS curve, the risk of MACE increased significantly with increasing RLP-C, consistent with previous studies.
Nevertheless, a noteworthy trend was observed: as RLP-C levels increased, the curve exhibited a progressively less steep slope, potentially due to the limited amount of data available. Expanding the sample size of the high-RLP-C subgroup in future investigations is therefore paramount to mitigate potential bias. The findings suggest that RLP-C is not only a robust prognostic indicator for adverse outcomes in IHF patients undergoing PCI but also holds clinical implications that could influence current risk assessment and treatment strategies. Incorporating RLP-C levels into routine risk assessment could help identify patients who remain at high risk of MACE despite achieving LDL-C goals, thereby guiding more personalized and aggressive therapeutic interventions. For instance, in patients with elevated RLP-C, clinicians might consider intensifying lipid-lowering therapy or closely monitoring for early signs of adverse cardiovascular events. Moreover, integrating RLP-C into existing risk models could improve their predictive accuracy, particularly in populations where traditional lipid measures may not fully capture residual cardiovascular risk. This approach aligns with the goals of precision medicine, where biomarkers like RLP-C could help tailor treatment plans to an individual's specific risk profile, potentially improving outcomes. Further prospective studies with larger and more diverse populations are needed to validate these findings and explore the utility of RLP-C in guiding clinical decision-making across different demographic groups. Additionally, examining the impact of therapeutic modulation of RLP-C levels could offer new insights into its role in managing cardiovascular risk in IHF patients. 4.2. Limitations There are several limitations to consider in this study.
Firstly, as a single-center, retrospective observational study, causality cannot be established, and residual confounding remains a concern despite multivariate adjustments. Unmeasured confounders may still influence the results. Secondly, selection bias may have arisen from the inclusion and exclusion criteria, potentially limiting the generalizability of the findings. Additionally, relying on medical records could introduce information bias due to variability in data accuracy. Thirdly, the possibility of reverse causality cannot be entirely excluded, given the observational nature of the study. It is conceivable that the occurrence of MACE could influence lipid metabolism, including RLP-C levels. However, the temporal relationship established in this study suggests that elevated RLP-C preceded the adverse outcomes, supporting the hypothesis of RLP-C as a risk factor rather than a consequence. Finally, reliance on initial blood tests may have introduced random errors, and the lack of follow-up RLP-C measurements could miss important prognostic information. The study participants were exclusively Chinese, warranting further research to determine if these findings can be generalized to other ethnic groups. To validate the results, a larger, multi-center study involving diverse populations is necessary. Further studies should also explore the potential impact of time-varying exposures and outcomes, and prospective designs could help establish causality more robustly.
Conclusions The study revealed that RLP-C, a readily measurable biomarker applicable in clinical practice, is strongly associated with an increased risk of MACE in IHF patients undergoing PCI. To validate these findings and explore the potential for incorporating RLP-C into routine risk assessment models, further prospective, randomized studies in larger and more diverse populations are needed. Additionally, investigating the impact of interventions targeting RLP-C levels on cardiovascular outcomes will be crucial for understanding its role in clinical practice.
Mechanism and management of acute femoral artery occlusion caused by suture-mediated vascular closure device following neurointervention | c88cf758-81fc-476c-8e5b-a2ccbd9d16b3 | 11481141 | Suturing[mh] | The femoral artery is the most common access site for therapeutic neurointervention. Vascular closure devices (VCDs) have been widely used as an alternative to manual compression of the puncture site after transfemoral access to reduce the need for bed rest and reduce discomfort. A large observational study of 84 172 patients undergoing peripheral vascular interventions with sheaths ≤8 Fr showed that haemostasis with a VCD, including collagen plug devices and suture-mediated devices, was associated with a lower risk of puncture site-related complications compared with manual compression. A subanalysis of that study found that the incidence of puncture site-related complications was lower under VCD use, ranging from 1.1% to 2.3% for mild and only 0.1%–0.3% for severe. Puncture-related complications resulting from suture-mediated VCD have been previously identified as haematoma, pseudoaneurysm, arteriovenous fistula, retroperitoneal haemorrhage, thrombosis and infection. However, the occurrence of femoral artery stenosis or occlusion due to suture-mediated VCD has been rarely documented, so the mechanism of occurrence is not yet fully recognised, and standard management practices have yet to be established. We present a case of symptomatic femoral artery occlusion due to intimal dissection of the posterior wall of the vessel caused by suture-mediated VCD. Combined therapy with endovascular treatment to remove the thrombus from the occlusion site and subsequent surgical treatment to remove the causative suture successfully restored blood flow. In addition, we suggest methods of preventing the occurrence of this complication.
A patient in her 30s with no pre-existing medical conditions was referred to our institution with a newly diagnosed unruptured intracranial aneurysm. The maximum diameter of the aneurysm in the left internal carotid artery (ICA) was 7.6 mm ( ). Flow-diverter stenting with adjunctive coiling was undertaken to prevent rupture. The patient had received dual antiplatelet therapy (aspirin, 100 mg/day; prasugrel, 3.75 mg/day) for 14 days prior to the procedure. Endovascular treatment via transfemoral access was performed under general anaesthesia. Heparin was administered during the procedure to maintain the activated clotting time at ≥300 s. The right common femoral artery (CFA) was punctured under ultrasound guidance using the single-wall arterial puncture technique, and a 7-Fr long sheath (Terumo, Somerset, New Jersey, USA) was inserted. Right femoral angiography showed no obvious stenosis or calcification from the CFA to the external iliac artery (EIA) ( ). A 7-Fr balloon guide catheter (OPTIMO; Tokai Medical Products, Aichi, Japan) was then guided to the cervical segment of the left ICA. With one coil inserted into the aneurysm beforehand, a Pipeline embolisation device with Shield Technology (Medtronic, Dublin, Ireland) was placed from the supraclinoid segment to the cavernous segment of the left ICA with sufficient coverage of the aneurysm neck. Coil embolisation was completed with dome filling and a packing density of 16.6%, without complications ( ). A suture-mediated VCD (Perclose ProStyle; Abbott Vascular, Redwood, California, USA) was used for haemostasis at the puncture site. The entry angle of the 7-Fr long sheath was less than 45°, so the VCD exchanged for the sheath was also inserted at an angle of less than 45°. Some resistance was encountered when inserting the first VCD into the vessel for haemostasis. As haemostasis with the first VCD failed, a second VCD was employed to stop the bleeding.
The second VCD sheath had to be inserted far beyond the ‘Distal Guide’ before backflow of arterial blood from the ‘Marker Lumen’ was seen. Even with the VCD, arterial bleeding from the puncture site persisted, so manual compression was applied for 15 min to achieve haemostasis. After treatment, no neurological complications were observed. However, when walking was resumed 2 days after treatment, the patient complained of pain and numbness in the right lower extremity every time she walked approximately 10 m.
In addition to the intermittent claudication, the right CFA and dorsal pedal artery pulses were not palpable. The ankle-brachial index was 0.51 in the right lower extremity and 1.22 in the left lower extremity, with a prominent decrease on the right side. Three-dimensional CT angiography revealed occlusion from the right EIA to the right CFA.
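The ankle-brachial index quoted above is simply the ratio of ankle to brachial systolic pressure, with values well below 1.0 suggesting obstructed arterial inflow. A trivial sketch (the pressures below are illustrative choices that reproduce the reported indices, not the patient's actual measurements):

```python
def ankle_brachial_index(ankle_systolic: float, brachial_systolic: float) -> float:
    """ABI = ankle systolic pressure / brachial systolic pressure, rounded to 2 decimals."""
    return round(ankle_systolic / brachial_systolic, 2)

# Illustrative systolic pressures in mmHg; an ABI below roughly 0.9 is
# generally taken to suggest peripheral arterial obstruction.
right_abi = ankle_brachial_index(66, 130)   # markedly reduced, consistent with occlusion
left_abi = ankle_brachial_index(158, 130)   # within the normal-to-elevated range
```

These illustrative inputs yield 0.51 on the right and 1.22 on the left, matching the asymmetry described in the case.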
Emergency vascular surgery was performed to recanalise the occluded vessels. After inserting a diagnostic catheter (Impress; Merit Medical, South Jordan, Utah, USA) via a cross-over approach from the left CFA to the right common iliac artery, right EIA angiography revealed that the right EIA to CFA was occluded proximal to the puncture point for the neurointervention ( ). The right CFA was then directly exposed, showing the sutures on the surface of the CFA ( ). Following transverse arteriotomy, an embolectomy catheter (LeMaitre Vascular, Burlington, Massachusetts, USA) was passed through the occlusion site and a large amount of dark-red thrombus was retrieved by pulling the inflated balloon of the embolectomy catheter ( ). However, blood flow in the right CFA did not resume despite thrombus removal. Additional longitudinal arteriotomy was performed just distal to the suture knot, and dissection of the posterior wall intima was noted ( ). The occlusion was due to the suture between the dissected posterior wall intima and the anterior wall, severely narrowing the vessel lumen ( ). When the suture thread was cut, stenosis in the lumen was completely resolved. Patch angioplasty was performed using bovine pericardium (XenoSure; LeMaitre Vascular). Blood flow in the right CFA was completely restored ( ). The dorsal pedal artery pulse became palpable.
Following the resumption of blood flow in the occluded right CFA, the patient no longer exhibited intermittent claudication. The ankle-brachial index was 1.06 on the right lower extremity and 1.13 on the left lower extremity, with the right side improving to normal. Postoperative dual antiplatelet therapy was continued. The patient was discharged home 6 days after surgery without neurological sequelae. Ultrasound examination 3 months after treatment demonstrated artery patency from the CFA to the EIA, with no stenosis ( ). The patient has remained free of symptoms for 6 months as of the time of this report.
In the present case, the lumen of the CFA was severely stenosed by sutures between the posterior wall intima dissected by the suture-mediated VCD and the anterior wall. The CFA was then completely occluded by thrombus formation at the stenosed lumen due to manual compression for haemostasis. Endovascular treatment to remove the thrombus and release the occlusion, together with surgical removal of the restrictive suture, completely relieved the lumen stenosis and allowed the resumption of normal blood flow in the right CFA. The frequency of vascular occlusion after suture-mediated VCD is low. In a previous study of 2177 patients who underwent closure with Perclose devices after percutaneous coronary intervention, arterial occlusion or loss of pulse occurred in only 0.1% of cases. The mechanisms underlying femoral artery stenosis or occlusion caused by suture-mediated VCD have not yet been fully elucidated, and standard management strategies have not been established.

Mechanism of femoral artery stenosis or occlusion by suture-mediated VCD

Mechanisms of femoral artery stenosis or occlusion caused by suture-mediated VCD have been proposed in a few different scenarios. Gemmete et al speculated that stenosis at the origin of the right superficial femoral artery after puncture site haemostasis with suture-mediated VCD could be due to an inflammatory reaction or thrombus formation caused by trauma to the endothelium during puncture or device deployment. Park et al experienced occlusion of the right femoral artery after haemostasis with suture-mediated VCD. On surgical exploration, they found an intimal dissection of the anterior wall where the suture had been removed, while the posterior wall was intact. Based on these findings, they attributed the occlusion to the anterior foot of the device catching and dissecting the anterior wall of the proximal vessel because the insertion angle of the device to the artery was less than 45°.
Youn et al illustrated that when pulling the suture-mediated VCD posteriorly near the bifurcation, the posterior footplate may catch on the side branch and the suture may penetrate the carina at the bifurcation. Under such a scenario, the posterior wall of the CFA and carina may be pulled towards the anterior wall, or the anterior wall may be pulled posteriorly and downward and then intussuscepted when tying the suture. In addition, Archie et al demonstrated in a tube model that the footplate of the Perclose Proglide, a suture-mediated VCD, easily engages the bifurcation point and allows the needle to deploy through the posterior wall, thereby leaving the suture knot in the artery. In the present case, resistance to insertion of the first VCD into the CFA was likely to have caused injury to the intima of the posterior wall of the vessel. A second VCD was then inserted at an angle of less than 45° to the artery. When the VCD is inserted at less than 45° to the vessel axis, the footplate deploys perpendicular rather than parallel to the vessel axis, increasing the likelihood that the edge of the posterior foot of the footplate will catch on the detached intima and snag on the injured intima of the posterior wall of the vessel ( ). The posterior foot of the footplate in the device was retracted while hooked to the intima of the posterior wall of the vessel injured by the first VCD ( ). Further, if the device was inserted at less than 45°, the sheath would have to be advanced a longer distance beyond the ‘Distal Guide’ to check for backflow of arterial blood from the ‘Marker Lumen’. This in turn increases the distance the footplate is pulled after deployment and increases the likelihood of trapping and dissection of the posterior wall intima of the vessel by the footplate. Subsequently, the dissected posterior wall was sutured to the anterior wall, causing severe stenosis in the vessel lumen ( ).
Finally, the gradual formation of a thrombus in the stenotic area led to occlusion.

Management of femoral artery stenosis or occlusion caused by suture-mediated VCD

Depending on the mechanism of occurrence, endovascular or surgical treatment can be applied to treat femoral artery stenosis or occlusion caused by suture-mediated VCD. Regarding endovascular treatment, Gemmete et al achieved improved blood flow by dilating the vessel with balloon angioplasty to address tight stenosis at the origin of the right superficial femoral artery associated with inflammatory reaction or thrombus formation caused by the suture-mediated VCD. Youn et al successfully reopened an occluded CFA in which the posterior foot was trapped within the side branch, causing dissection. In that case, the suture was cut by endovascular treatment with a rotational atherectomy device. However, the authors cautioned that the procedure raises concerns about distal embolisation of the suture material, vessel rupture and suture twisting around the atherectomy device. They noted that, in limited cases, the procedure should be performed with caution after preparing a surgical backup in case of vascular rupture and placing a filter device to prevent embolism. In terms of surgical treatment, Park et al resolved an occlusion caused by intussusception of the arterial vessel wall due to intimal dissection induced by the suture-mediated VCD by direct surgical removal of the suture. Based on these findings, while endovascular intervention may improve blood flow in cases where inflammation or thrombosis is the causative factor, suture-induced stenosis or occlusion of the vessel can be difficult to address endovascularly and may require surgical removal of the suture, a more invasive approach.
In the present case, even after endovascular treatment to remove the thrombus and release the occlusion, severe vascular stenosis remained due to the suture between the dissected posterior wall intima and the anterior wall, so direct surgical removal of the suture was necessary. Consequently, stenosis of the lumen completely disappeared, blood flow in the right CFA was fully restored, and symptoms in the right lower extremity were completely resolved. Even if the CFA becomes occluded, prompt release of the occlusion can prevent neurological sequelae. Stenosis or occlusion of the vessel by sutures can be difficult to address with endovascular therapy and may require surgical excision of the suture material, representing more invasive management.

Preventive measures to avoid femoral artery stenosis or occlusion by suture-mediated VCD

Based on the above mechanisms and management approaches, the most important preventive measure to avoid femoral artery stenosis or occlusion due to suture-mediated VCD is to insert the device so that the insertion angle is not less than 45°. The suture-mediated VCD is designed so that the footplate is deployed parallel to the vessel axis when inserted at a 45° angle ( ). In that way, the risk of injuring the vascular intima can be minimised. Conversely, device puncture angles of less than 45° can cause hooking and dissection of the intima by the footplate, and subsequent suturing of the dissected intima can cause vascular stenosis or occlusion. To avoid stenosis or occlusion of the femoral artery caused by the suture-mediated VCD, surgeons should ensure that the entry angle for the device is not below 45°. If injury to the posterior wall intima of the CFA is suspected, it is essential to evaluate both the presence of intimal dissection and the extent of vascular stenosis using ultrasound imaging.
Additionally, the pulsation of adjacent peripheral arteries, such as the popliteal artery and the dorsal pedal artery, needs to be confirmed. Surgical intervention is warranted in cases where dissection results in flow-limiting stenosis. Haemostasis should be achieved by manual compression rather than by the use of suture-mediated VCDs in cases of injury to the posterior wall intima, to minimise the potential risk of further injury to the intima.

Learning points

Suture-mediated vascular closure device (VCD) insertion at less than 45° to the femoral artery may cause the posterior foot of the device to retract while hooked to the intima of the posterior artery wall and contribute to dissection.

Use of a suture-mediated VCD with dissected posterior wall intima may cause stenosis or occlusion of the vessel lumen due to the suture between the dissected posterior wall intima and anterior wall.
Stenosis or occlusion of vessels associated with sutures between the dissected posterior wall intima and anterior wall can be completely resolved by surgical removal of the sutures.

The surgeon should ensure that the entry angle for the device is not below 45° to avoid stenosis or occlusion of the femoral artery caused by suture-mediated VCD.
Comparative effectiveness of robot‐assisted radical cystectomy with intracorporeal urinary diversion vs open radical cystectomy for bladder cancer | 615b012a-c225-49a8-8510-095b9cde65d9 | 11842888 | Robotic Surgical Procedures[mh] | Bladder cancer (BC) is the 10th most common malignancy worldwide with ~600 000 new cases estimated in 2020 . For patients with localised muscle‐invasive BC (MIBC) or recurrent high‐risk non‐muscle‐invasive BC (NMIBC), radical cystectomy (RC) with concomitant pelvic lymph node dissection (PLND) remains the standard of care , providing 5‐ and 10‐year overall survival (OS) as high as 66% and 43%, respectively . With regards to technical aspects, RC has traditionally been performed using an open approach (ORC) that carries significant morbidity with >30% of patients experiencing at least one severe postoperative complication within 30 days after surgery when applying rigorous assessment . To improve perioperative outcomes of this procedure, minimal invasive approaches including pure laparoscopic and robot‐assisted RC (RARC) have been developed over the past decades. However, only RARC can be considered as a major breakthrough that is widely used worldwide , given the unfavourable ergonomics associated with the use of pure laparoscopy for bladder removal and more importantly, urinary diversion (UD) creation. Interestingly, multiple historical retrospective studies , randomised controlled trials (RCTs) [ , , , , , , , , ] along with a meta‐analysis of Level‐I evidence have generally confirmed that RARC provides better perioperative outcomes, mainly including lower blood loss and shorter length of stay (LoS), with similar oncological outcomes as compared to ORC. However, it is noteworthy that these reports only considered patients who underwent extracorporeal UD (ECUD) in the RARC group. 
Given that UD is likely to contribute more than RC itself to the morbidity of the procedure, RARC with intracorporeal UD (ICUD) has been proposed to further improve on the perioperative outcomes associated with ECUD. In fact, this has been confirmed by the largest RCT showing a benefit of >2 days alive and out of the hospital within 90 days of RARC with ICUD vs ORC , while other smaller RCTs mostly showed perioperative benefits similar to those reported in RCTs comparing RARC with ECUD vs ORC [ , , , ]. However, real-life comparative evidence has provided more conflicting results, leading to uncertainty about the effectiveness of RARC with ICUD vs ORC . Against this backdrop, we aimed to conduct a single-institution retrospective study comparing the perioperative, oncological and stricture outcomes of RARC with ICUD vs ORC for BC.
Study Design and Data Collection

All consecutive patients who underwent RC for either MIBC or recurrent high-risk NMIBC at our institution from 2014 to 2023 were retrospectively included in this study. After Institutional Review Board approval (2235364), a chart review was conducted to extract patient (age, gender, body mass index, European Cooperative Oncology Group Performance Status [ECOG-PS] and smoking history), tumour (cTN stage, histology, pTN stage, tumour grade, concomitant carcinoma in situ), and treatment (neoadjuvant chemotherapy [NAC], RC approach, UD type, adjuvant chemotherapy, adjuvant immunotherapy, and adjuvant radiotherapy) characteristics, as well as perioperative, oncological and stricture outcomes.

Surgical Procedures

There were no patient- or tumour-related criteria for eligibility for RARC, which was left at the surgeon's discretion. The same oncological principles applied to RARC and ORC, which were systematically associated with bilateral extensive PLND and ureteric frozen section analysis, while only selected cases underwent urethral frozen section analysis. All RARCs were performed via a transperitoneal approach using either the Da Vinci® Si or Xi robotic platform (Intuitive Surgical Inc., Sunnyvale, CA, USA) in a four-arm configuration. The choice of UD type (ileal conduit, neobladder or cutaneous ureterostomy) after RARC or ORC was based on patient and tumour characteristics in accordance with the latest European Association of Urology guidelines . Only ICUD was used in the RARC group.

Outcomes

Perioperative outcomes included operative time, intraoperative complications, blood loss, perioperative blood transfusion, 90-day overall and major (defined as Clavien–Dindo Grade ≥III) complications, as well as initial LoS, 90-day re-hospitalisation and number of days alive and out of the hospital within 90 days of surgery.
Oncological outcomes included pathological (lymph node [LN] count and surgical margins) and survival (recurrence-free survival [RFS], defined as the time from RC to relapse or death; cancer-specific survival [CSS], defined as the time from RC to death from BC; and OS, defined as the time from RC to death from any cause) outcomes. Stricture outcomes included uretero-ileal stricture-free survival (SFS), assessed only in patients treated with RC and a concomitant ileal conduit or neobladder.

Statistical Analyses

First, we described trends in the use of RARC and ORC in our department over time, which were tested using the P test for trend. Second, continuous variables were reported using median and interquartile range (IQR), while frequency and proportion were used for categorical variables. Bivariate analyses were performed using Fisher's exact test, and continuous variables were examined using the Mann–Whitney U test. Third, univariable logistic regression analyses were conducted to identify covariates associated with risks of major blood loss (defined as >median of the cohort), perioperative blood transfusion, 90-day major complications and prolonged initial LoS (defined as >median of the cohort), as well as those associated with more days alive and out of the hospital within 90 days of surgery (defined as >median of the cohort) and higher LN count (defined as >median of the cohort), by calculating the corresponding odds ratios (ORs) and their 95% CIs. Only significant covariates were included in multivariable logistic regression models to identify the independent predictors of the aforementioned perioperative and pathological outcomes. Fourth, the Kaplan–Meier method with the log-rank test was used to compare RFS, CSS, OS and uretero-ileal SFS between the RARC and ORC groups. Univariable Cox regression analyses were conducted to identify covariates associated with RFS, CSS, OS and uretero-ileal SFS by calculating the corresponding hazard ratios (HRs) and their 95% CIs.
Only significant covariates were included in multivariable Cox regression models to identify the independent predictors of RFS, CSS, OS and uretero‐ileal SFS. All statistical analyses were performed using R software (R Foundation for Statistical Computing, Vienna, Austria). Two‐sided statistical significance was defined as P < 0.05.
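As a minimal sketch of the crude odds-ratio computation underlying the univariable analyses described above (using the standard Woolf/Wald log-OR confidence interval; the 2×2 counts below are reconstructed from the reported transfusion rates of 11.8% of 228 RARC vs 36.4% of 88 ORC patients and are illustrative, not the study's adjusted estimates):

```python
import math


def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio and Wald (Woolf) confidence interval for a 2x2 table:
    a = exposed with event, b = exposed without event,
    c = unexposed with event, d = unexposed without event."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi


# Perioperative transfusion by approach (counts reconstructed from the
# reported percentages, RARC as the "exposed" group):
or_, lo, hi = odds_ratio_ci(27, 201, 32, 56)
print(f"crude OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The crude OR is below 1 with a confidence interval excluding 1, in the same direction as the multivariable estimate reported in the Results (OR 0.30, 95% CI 0.16–0.57), though the adjusted model conditions on additional covariates.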
Baseline Characteristics

Overall, 316 patients underwent RARC with ICUD (n = 228 [72.2%]) or ORC (n = 88 [27.8%]) at our institution between 2014 and 2023. The use of RARC significantly increased from 7.7% (n = 2) to 100% (n = 36), while that of ORC significantly decreased from 92.3% (n = 24) to 0% (n = 0) over the study period (P < 0.001; Fig. ). Patients treated with RARC were more likely to have an ECOG-PS = 0 (48.2% vs 15.9%; P < 0.001) without any smoking history (25.9% vs 14.8%; P = 0.035), and to receive NAC (41.2% vs 27.3%; P = 0.02) and/or a neobladder (38.6% vs 21.6%; P < 0.001) for pN0 disease (76.8% vs 69.3%; P = 0.01), as compared to those treated with ORC. In addition, they were less likely to receive adjuvant radiotherapy (1.8% vs 6.8%; P = 0.03). Other baseline characteristics did not significantly differ between the RARC and ORC groups (Table ).

Perioperative Outcomes

First, the use of RARC was associated with similar operative time (median [IQR] 300 [255–350] vs 327 [255–386] min; P = 0.1) and risk of intraoperative complications (1.8% vs 4.5%; P = 0.2), but decreased estimated blood loss (median [IQR] 300 [200–500] vs 700 [575–1000] mL; P < 0.001) and risk of perioperative blood transfusion (11.8% vs 36.4%; P < 0.001), as compared to ORC (Table ). Univariable logistic regression analysis showed that the use of RARC vs ORC was the only predictor of decreased risk of major blood loss (OR 0.10, 95% CI 0.04–0.23; P < 0.001; Table ). In addition, multivariable logistic regression analysis showed that the use of RARC vs ORC was an independent predictor of decreased risk of perioperative blood transfusion (OR 0.30, 95% CI 0.16–0.57; P < 0.001; Table ). Second, the use of RARC was associated with similar risk of 90-day overall complications (55.3% vs 63.6%; P = 0.2) but decreased risk of 90-day major complications (18.9% vs 34.1%; P = 0.016), as compared to ORC.
Multivariable logistic regression analyses showed that the use of RARC vs ORC was an independent predictor of decreased risk of 90-day major complications (OR 0.56, 95% CI 0.29–0.99; P = 0.04; Table ). Finally, the use of RARC was associated with a similar 90-day re-hospitalisation rate (33.7% vs 32.9%; P = 0.9) but shorter initial LoS (median [IQR] 14 [9–16] vs 15 [13–20] days; P = 0.02) and a higher number of days alive and out of the hospital within 90 days of surgery (median [IQR] 75 [69–78] vs 72 [67–76] days; P = 0.018), as compared to ORC. Multivariable logistic regression analyses showed that the use of RARC vs ORC was an independent predictor of decreased risk of prolonged initial LoS (OR 0.20, 95% CI 0.09–0.35; P < 0.001; Table ) and more days alive and out of the hospital within 90 days of surgery (OR 2.56, 95% CI 1.46–4.60; P < 0.01; Table ).

Oncological Outcomes

With regard to pathological outcomes (Table ), the use of RARC was associated with higher LN count (median [IQR] 16 [11–20] vs 9 [3–15] LNs; P < 0.001) and similar risk of positive surgical margins (10.1% vs 11.4%; P = 0.7), regardless of their location (P = 0.8), as compared to ORC. Multivariable logistic regression analyses showed that the use of RARC vs ORC was an independent predictor of higher LN count (OR 3.35, 95% CI 1.83–6.30; P < 0.001; Table ). With regard to survival outcomes, 115 (36.4%) patients had disease recurrence, while 109 (34.5%) died, including 77 (24.4%) from BC, after a median (IQR) follow-up of 42.3 (16.4–73.8) months. Kaplan–Meier curves showed that the use of RARC vs ORC was associated with a similar 5-year RFS rate (57.8% [95% CI 50.3–66.4%] vs 43.6% [95% CI 33.3–57.1%]; P = 0.06; Fig. ) but significantly higher 5-year CSS rate (71.1% [95% CI 63.4–79.7%] vs 53.1% [95% CI 42.2–66.8%]; P = 0.02; Fig. ) and 5-year OS rate (62.4% [95% CI 54.3–71.6%] vs 43.7% [95% CI 33.5–57.0%]; P = 0.01; Fig. ).
Univariable Cox regression analysis confirmed that the use of RARC vs ORC was not significantly associated with RFS (HR 0.72, 95% CI 0.49–1.07; P = 0.1; Table ). In addition, multivariable Cox regression analyses showed that the use of RARC vs ORC was not significantly associated with CSS (HR 0.69, 95% CI 0.43–1.10; P = 0.1; Table ) or OS (HR 0.76, 95% CI 0.47–1.20; P = 0.3; Table ).

Stricture Outcomes

Among the 301 patients who underwent RC with an ileal conduit or neobladder, there was no significant difference in the overall rate of uretero‐ileal strictures (17.5% vs 14.8%; P = 0.6; Table ) between the RARC and ORC groups. In addition, the 5‐year uretero‐ileal SFS rate did not significantly differ between the two groups (72.8% [95% CI 64.8–81.9%] vs 81.3% [95% CI 72.0–91.7%]; P = 0.7; Fig. ), and univariable Cox regression analysis confirmed that the use of RARC vs ORC was not significantly associated with uretero‐ileal SFS (HR 1.18, 95% CI 0.62–2.25; P = 0.6; Table ).
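As a plausibility check on effect sizes like these, the crude (unadjusted) odds ratio can be reconstructed from the reported proportions alone. The sketch below uses the perioperative transfusion figures quoted above (11.8% of 228 RARC vs 36.4% of 88 ORC patients); it is illustrative only and will not reproduce the adjusted OR of 0.30, because the multivariable model controls for covariates. The event counts (27 and 32) are reconstructed from the percentages.

```python
import math

def crude_odds_ratio(p_exposed: float, p_control: float) -> float:
    """Unadjusted odds ratio from two event proportions."""
    odds_exposed = p_exposed / (1 - p_exposed)
    odds_control = p_control / (1 - p_control)
    return odds_exposed / odds_control

def wald_ci(events_e: int, n_e: int, events_c: int, n_c: int, z: float = 1.96):
    """Approximate 95% Wald confidence interval for the crude OR from counts."""
    a, b = events_e, n_e - events_e
    c, d = events_c, n_c - events_c
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return math.exp(log_or - z * se), math.exp(log_or + z * se)

# Perioperative transfusion: 11.8% of 228 (RARC) vs 36.4% of 88 (ORC)
or_crude = crude_odds_ratio(0.118, 0.364)
lo95, hi95 = wald_ci(27, 228, 32, 88)  # counts reconstructed from the percentages
print(round(or_crude, 2))  # 0.23 — crude, vs the adjusted OR of 0.30 in the text
```

The gap between the crude estimate (≈0.23) and the reported multivariable OR (0.30) is expected: adjustment for baseline imbalances (ECOG-PS, NAC, diversion type) moves the estimate.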
Recent prospective [ , , , , ] and retrospective evidence suggests that the use of RARC with ICUD vs ORC could provide similar oncological outcomes with several perioperative benefits, even greater than those observed after RARC with ECUD. Based on our monocentric real‐life experience, we confirmed these findings by comparing the effectiveness of RARC with ICUD vs ORC for BC. With regard to perioperative outcomes, we found that RARC with ICUD was associated with similar operative time and risk of intraoperative complications, as well as decreased blood loss and risk of blood transfusion, as compared to ORC. Although most of the prospective [ , , , ] and retrospective [ , , ] evidence showed that RARC with ICUD was associated with a longer operative time than ORC, Mortezavi et al. also observed similar operative times between both groups in a large population‐based cohort study. In addition, the RCT by Maibom et al. confirmed a similar risk of intraoperative complications during RARC with ICUD and ORC, while most of the prospective [ , , , ] and retrospective evidence [ , , , ] is in line with our findings regarding decreased blood loss and transfusion after RARC with ICUD vs ORC. In addition, although the risk of 90‐day overall complications was similar between the RARC and ORC groups, we found that the use of RARC was associated with a decreased risk of 90‐day major complications even after adjusting for confounding. Although the latter finding contrasts with all available RCTs [ , , , , ], several retrospective comparative studies also reported a decreased risk of 90‐day major complications after RARC with ICUD, ranging from 16.9% to 17.2% . This has likely contributed to the shorter initial LoS after RARC with ICUD observed in our study, as well as in multiple other retrospective reports [ , , , ] available in the literature and, to a lesser extent, in several RCTs [ , , ].
More importantly, we also observed a benefit of 3 days alive and out of the hospital within 90 days of RARC with ICUD vs ORC, in line with the largest RCT to date but, to the best of our knowledge, never before confirmed in the real‐life setting. With regard to oncological outcomes, we found that the use of RARC did not increase the risk of positive surgical margins, as supported by prospective [ , , , ] and retrospective evidence [ , , ], while yielding a higher LN count. Similarly, several RCTs showed a numerically higher LN count after RARC vs ORC [ , , , ] and many other retrospective studies reported a significant benefit of 4.5–6 LNs favouring the RARC group [ , , ]. In addition, the absence of any survival difference between RARC and ORC observed in our study aligns with previous RCTs . Finally, with regard to stricture outcomes, although the risk of uretero‐ileal stricture was numerically slightly lower in the ORC group, multivariable Cox regression analysis showed that the use of RARC with ICUD was not an independent predictor of uretero‐ileal SFS. Similarly, although a recent systematic review of the literature suggested a numerically slightly higher risk of uretero‐ileal stricture after RARC with ICUD (15%) vs ECUD (12.4%) or ORC (9.6%), none of the individual reports included in that study found a significant difference between these three approaches . This suggests that other patient, disease and/or treatment characteristics could be at play, including the UD type per our results. In addition, surgeon experience is likely to contribute to the risk of uretero‐ileal stricture, as previously reported by Ericson et al., who showed a 17.5% rate during their initial experience that dropped to 4.9% after 75 cases. It is noteworthy that our report is not devoid of limitations. First, the present findings need to be interpreted within the limitations of the observational study design.
The analyses are subject to selection bias, which we attempted to mitigate by using multivariable logistic and Cox regression analyses. However, residual confounding could still have impacted our results, although most of the relevant covariates were available in our database. In addition, the monocentric study design, with a small sample size and a 10‐year inclusion period, could limit the generalisability of our findings, observed at a high‐volume tertiary care centre, to vastly different settings. Finally, no enhanced recovery protocol is currently used in our department and thus we were not able to evaluate its impact on the perioperative outcomes of patients undergoing RC.
Our real‐world study supports the effectiveness of RARC with ICUD vs ORC for BC. We generally observed better perioperative outcomes after RARC with ICUD, notably including a decreased risk of 90‐day major complications and more days alive and out of the hospital within 90 days of surgery. In addition, the use of RARC with ICUD was associated with similar oncological (except for a higher LN count) and stricture outcomes as compared to ORC. Thus, all these data, taken together with the currently available prospective evidence, suggest that RARC with ICUD is likely to become the standard of care for localised MIBC or recurrent high‐risk NMIBC.
The authors have no disclosures.
Figure S1. Trends in the use (A/ Proportion; B/ Frequency) of RARC and ORC cystectomy for BC at Pitié‐Salpêtrière Hospital between 2014 and 2023.
Synthesis and Transformations of NH‐Sulfoximines

Introduction

Sulfoximines, the mono‐aza analogues of sulfones, have attracted the interest of numerous research groups worldwide, as witnessed by the large number of publications that have appeared in the last decade. Since the first discovery of the irreversible glutamine synthetase inhibitor L‐methionine‐(S)‐sulfoximine (MSO), the number of bioactive molecules including the sulfoximine moiety in their structure has increased dramatically. Soon after, buthionine sulfoximine (BSO), a gamma‐glutamylcysteine synthetase inhibitor, was found suitable for treating tumors in which GSH is overexpressed, and as an adjuvant in chemotherapy. A wide range of sulfoximines have been assessed as bioactive agents and some have entered clinical trials, as in the case of the kinase inhibitors roniciclib, BAY 1143572, and AZD 6738, for the treatment of cancer (Scheme ). Very recently, a new sulfoximine‐forming compound was reported to treat herpes infections. From a structural point of view, sulfoximines feature a tetrahedral sulfur atom and a basic nitrogen atom able to coordinate metal ions and form salts with mineral acids. The stereogenicity of the sulfur center provides configurationally stable and hence optically active sulfoximine stereoisomers. The sulfoximine moiety can introduce favorable pharmacokinetic properties to molecular scaffolds, such as better solubility in protic solvents, hydrogen‐bond acceptor/donor capability, and chemical and metabolic stability in comparison to related sulfone or sulfonamide structures. These physicochemical properties can be additionally tuned by N‐functionalization reactions. In addition to the great interest in the chemistry of sulfoximines in drug discovery programs, this S(VI) functionality finds use in modern synthesis as chiral auxiliaries or ligands for asymmetric catalysis.
In addition, possible degradation and conversion pathways for sulfoximines have been investigated in order to assess the potential risk of sulfoximine metabolites for crop protection and medicinal chemistry applications. The renewed interest in the chemistry of sulfoximines is showcased by the invention of new synthetic strategies for their preparation and functionalization. This review aims to provide an up‐to‐date overview of the recently introduced synthetic strategies for accessing NH‐sulfoximines and also to cover their functionalization. The field continues to expand rapidly, and the review will concentrate on recent advances from the last decade, particularly since a major review by Bolm. We first focus on methods that directly form NH‐sulfoximines (rather than via an intermediate protected form), including applications in continuous flow. We then review methods for the functionalization of these NH derivatives, organized by the nature of the N‐functional group. Finally, we cover cyclisation reactions for the formation of non‐planar heterocycles containing the S(O)=N functionality. Together, we expect this will provide a valuable reference for the synthetic and medicinal chemistry communities for the preparation of these valuable motifs and their derivatives.
Synthesis of NH‐Sulfoximines

The most classical routes to access sulfoximines involve the initial introduction of nitrogen or oxygen to sulfides to give, respectively, the corresponding sulfilimines or sulfoxides. Further oxidation of sulfilimines or N‐transfer to sulfoxides provides the corresponding sulfoximines. These simple routes commonly provide N‐protected sulfoximines, which require a final deprotection step for the formation of NH‐sulfoximines. The N‐transfer steps have been carried out through metal‐catalyzed or metal‐free processes. In 2004, Bolm and Okamura described an efficient two‐step method for accessing NH‐sulfoximines from sulfoxides. This protocol achieved the synthesis of N‐trifluoroacetylsulfoximines 2 by reacting trifluoroacetamide with iodobenzene diacetate and magnesium oxide with a Rh catalyst (Scheme ). The resulting N‐acyl sulfoximines were readily deprotected with potassium carbonate in methanol, affording NH‐sulfoximines 3 in good yields (Scheme ). Notably, an air‐stable rhodium catalyst and a mild oxidant are involved, avoiding the use of hazardous iminating agents such as azido derivatives or the explosive O‐ (mesitylenesulfonyl)hydroxylamine (MSH). The reaction of an optically pure sulfoxide allowed the preparation of the corresponding enantiopure NH‐sulfoximine ( R )‐ 3 a (>99 : 1 er) without any loss of optical purity. Under these conditions, the imination reaction was stereospecific and occurred with retention of configuration at the sulfur center. NH‐Sulfoximines are also accessible through the electrolysis of N‐phthalimido sulfoximines 4 in methanol, using water as the proton source, under electrochemical conditions. The protocol, developed by Yudin and Siu, enabled the preparation of several dialkyl and diaryl sulfoximines 3 in good yields (Scheme ). The authors reported complete conversion of the starting materials, and the strategy avoids metal‐based reagents, catalysts, and toxic oxidants.
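Enantiopurity is quoted in this section as an enantiomeric ratio (er, e.g. >99 : 1 for ( R )‐ 3 a), while much of the literature instead quotes enantiomeric excess (ee); converting between the two is simple arithmetic. A minimal helper (illustrative only, not taken from any of the reviewed papers):

```python
def er_to_ee(major: float, minor: float) -> float:
    """Enantiomeric excess (%) from an er such as 99:1."""
    return 100.0 * (major - minor) / (major + minor)

def ee_to_er(ee_percent: float) -> tuple[float, float]:
    """er (major:minor, normalized so major + minor = 100) from an ee in percent."""
    major = (100.0 + ee_percent) / 2.0
    return major, 100.0 - major

print(er_to_ee(99, 1))  # 98.0 — a 99:1 er corresponds to 98% ee
print(ee_to_er(98.0))   # (99.0, 1.0)
```

So an er of ">99 : 1" corresponds to >98% ee, and a dr quoted the same way (e.g. 95 : 5 later in this review) converts identically to a diastereomeric excess of 90%.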
More recently, novel strategies involving NH‐transfer or the simultaneous one‐pot NH‐ and O‐transfer starting from sulfoxides or sulfides have been introduced, allowing direct access to NH‐sulfoximines without any further deprotection step. Inspired by the recent advances by Falck, Kurti and coworkers in the direct synthesis of NH‐aziridines from olefins, Richards and Ge developed the first rhodium‐catalyzed strategy for the preparation of NH‐sulfoximines directly from sulfoxides. The optimized protocol required 3 equivalents of O‐(2,4‐dinitrophenyl)‐hydroxylamine (DPH) and 2.5 mol% of Rh 2 (esp) 2 in trifluoroethanol (TFE) to obtain NH‐sulfoximines 3 in moderate to excellent yields (Scheme ). The scope of the reaction was broadly explored, as well as the compatibility of functional groups such as halogens and acyl groups on the phenyl ring of the starting sulfoxide. Diaryl, dialkyl, and cycloalkyl sulfoximines 3 were prepared in very good yields, and heteroaryl 2‐thiophenyl and 2‐pyridyl sulfoxides were likewise transformed. Moreover, the authors investigated the chemoselectivity of the reaction by reacting phenyl allyl sulfoxide. In this case, the imination reaction was found to be favored over aziridination, providing sulfoximine 3 b in 76 % yield. Concerning the mechanism of this N‐transfer strategy, the authors proposed the generation of a reactive Rh‐nitrene intermediate by the reaction of DPH with Rh 2 (esp) 2 and subsequent loss of dinitrophenol. In 2017, Liang reported the preparation of NH‐sulfoximines 3 from sulfoxides 1 using NaN 3 and Eaton's reagent (P 2 O 5 in methanesulfonic acid) at 50 °C (Scheme ). Very good yields of the corresponding NH‐sulfoximines were obtained employing 2 equivalents of NaN 3 . Attempts to reduce the amount of Eaton's reagent by using co‐solvents of chloroform or THF, or alternatively by running the reaction in neat methanesulfonic acid, caused a decrease in yields.
The reaction was found to be efficient with alkyl aryl sulfoxides, and good tolerance was demonstrated toward methoxy, cyano, halogen, and other substituents on the phenyl ring. Furthermore, this imination protocol was found to be efficient with aryl, heteroaryl and carbocyclic sulfoxides (Scheme ). However, enantiopure sulfoxides returned a racemic mixture of the corresponding sulfoximines. The proposed mechanism involved an unstable electrophilic aminodiazonium ion H 2 N 3 + able to provide the electrophilic nitrogen upon release of molecular N 2 . However, the role of P 2 O 5 in promoting the imination reaction remains to be clarified. In 2016, we (Luisi and Bull) developed a direct metal‐free method for NH transfer to sulfoxides using ammonium carbamate as an inexpensive and easy‐to‐handle nitrogen source, in the presence of diacetoxyiodobenzene (DIB) as the oxidant. The reaction could be successfully conducted in different solvents under slightly different conditions (Scheme ). The combination of ammonium carbamate and DIB in polar solvents, such as acetonitrile or methanol, as well as in nonpolar solvents such as toluene, provided excellent yields of the corresponding sulfoximine 3 a from sulfoxide 1 a . Interestingly, the method was readily scalable. The scope of the reaction was very general, working effectively with a wide range of sulfoxides, and the process proceeds with complete retention of configuration at the sulfur atom of enantioenriched sulfoxides. The functional group tolerance of the reaction was shown to be very high, and was further demonstrated using Glorius's robustness screen. Notably, heterocycles bearing basic nitrogen atoms (pyridine, pyrimidine, imidazole) were found to be highly compatible with the imination protocol, while electron‐rich heterocycles such as indole or furan were less tolerated.
The mechanism of this NH‐transfer was thoroughly investigated, and we proposed an unprecedented iodonitrene or iminoiodinane as the key electrophilic intermediate responsible for the N‐transfer to the sulfur atom. By using a continuous flow‐MS set‐up, mixing of PhI(OAc) 2 and ammonium carbamate revealed the HRMS signals of the short‐lived iminoiodinane (PhI=NH) I and iodonitrene (PhI=N + ) II (Scheme , a). Moreover, the use of 15 N‐labeled ammonium acetate as the N‐source resulted in the generation of 15 N‐labeled intermediates I and II . According to the mechanistic investigation, ammonia, deriving from ammonium carbamate, reacts with PhI(OAc) 2 to generate the intermediate iminoiodinane I or iodonitrene II , which reacts with the sulfoxide (Scheme , b). At the time, we proposed both of these as possible intermediates. Direct attack of the sulfoxide at iminoiodinane I would form NH‐sulfoximine 3 and iodobenzene, whereas iodonitrene II would furnish the iodonium salt III , which collapses to NH‐sulfoximine 3 after work‐up. However, further developments of these reagents suggest that the iodonitrene II is the true reagent, which is consistent with the direct and rapid formation of the iodonium salt III in situ. This stereospecific NH transfer to sulfoxides has been adopted into the manufacturing‐scale production of the ATR inhibitor AZD6738 (Ceralasertib) by AstraZeneca. Graham et al. reported the preparation of the sulfoximine‐containing intermediate 6 from the corresponding sulfoxide 5 (Scheme ). The optimized conditions for this process used a reduced amount of PhI(OAc) 2 (2.1 equivalents) at a reaction temperature of 5 °C in a mixed solvent system of MeOH and toluene. This enabled the preparation of 30 kg of the intermediate compound as the HCl salt at 99 % purity. This replaced the earlier development route, which used Rh‐catalyzed NH transfer with trifluoroacetamide in dichloromethane.
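For bench-scale planning, the reagent charges implied by "equivalents" reduce to one line of arithmetic. The sketch below is a generic illustration, not the published procedure: the molecular weights are standard values for PhI(OAc) 2 (322.10 g/mol) and ammonium carbamate (78.07 g/mol), the 2.1 and 1.5 equivalents mirror conditions quoted in this section, and the 10 mmol scale is arbitrary.

```python
# Illustrative reagent-charge calculator for an NH-transfer run.
# MW values are standard: PhI(OAc)2 = 322.10 g/mol, ammonium carbamate = 78.07 g/mol.
MW = {"PhI(OAc)2": 322.10, "NH2CO2NH4": 78.07}

def charge_g(reagent: str, equiv: float, scale_mmol: float) -> float:
    """Grams of reagent for a given substrate scale (mmol) and equivalents."""
    return MW[reagent] * equiv * scale_mmol / 1000.0

scale = 10.0  # mmol of sulfoxide (arbitrary example scale)
oxidant = charge_g("PhI(OAc)2", 2.1, scale)   # 2.1 equiv, as in the scaled-up process
n_source = charge_g("NH2CO2NH4", 1.5, scale)  # 1.5 equiv ammonium carbamate
print(f"PhI(OAc)2: {oxidant:.2f} g, ammonium carbamate: {n_source:.2f} g")
# -> PhI(OAc)2: 6.76 g, ammonium carbamate: 1.17 g
```

The same arithmetic scales linearly to plant quantities, which is one reason reducing the oxidant charge from 3 to 2.1 equivalents matters at the 30 kg scale described above.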
Next, Luisi and Bull reported that the combination of a source of ammonia and a hypervalent iodine oxidant (DIB) was effective for the direct conversion of sulfides into NH‐sulfoximines by a one‐pot NH‐ and O‐transfer. The remarkable transformation was achieved efficiently on several alkyl, aryl, benzyl, cycloalkyl, and heteroaryl sulfides 7 , leading to the corresponding sulfoximines 3 in excellent yields (Scheme ). The method was further validated by using several sources of ammonia (ammonium acetate, NH 3 in methanol, ammonium carbonate), including the cheap and readily available 15 N‐ammonium acetate, which afforded 15 N‐labeled NH‐sulfoximines of biologically relevant compounds such as biotin ( 3 c ), methionine ( 3 d ), and a dipeptide ( 3 e ) (Scheme ). At a similar time, Reboul reported a detailed mechanistic investigation of the one‐pot NH‐ and O‐transfer to sulfides, in an almost identical reaction developed independently. A detailed HRMS and NMR investigation identified sulfanenitrile species V and VI (Scheme ) as key intermediates in the conversion of sulfides into the corresponding NH‐sulfoximines. In accordance with previous observations, the mechanism proposed by Reboul (Scheme ) involved the short‐lived iodonitrene II , which reacts with the sulfide to generate the sulfilimine iodonium species IV . Further attack of the methoxy or acetate anion on IV leads to the methoxy‐ or acetoxy‐λ 6 ‐sulfanenitriles V or VI , respectively. Sulfanenitrile V may undergo nucleophilic attack by methanol, producing dimethyl ether and the corresponding NH‐sulfoximine 3 . Similarly, sulfanenitrile VI may behave as an acetylating agent, reacting either with the sulfoximine or methanol, leading to N‐acyl‐sulfoximine 8 and NH‐sulfoximine 3 (Scheme ). The proposed mechanism highlights the roles of both methanol and acetate as oxygen donors. The progress of the reaction was monitored by HRMS analysis, detecting both methoxy‐λ 6 ‐sulfanenitrile V and acetoxy‐λ 6 ‐sulfanenitrile VI .
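When 15 N-labeled reagents are used, the label is most easily confirmed in HRMS by the ~+0.997 Da shift of the monoisotopic mass. A small sketch of that calculation (the monoisotopic masses are standard values; S-methyl-S-phenylsulfoximine, C7H9NOS, is chosen here purely as an illustration, not as a compound from the cited work):

```python
# Standard monoisotopic masses (Da) of the relevant isotopes.
MONO = {"C": 12.0, "H": 1.00782503, "N": 14.00307401, "O": 15.99491462,
        "S": 31.97207117, "15N": 15.00010890}

def mono_mass(formula: dict) -> float:
    """Monoisotopic mass from an element -> count mapping."""
    return sum(MONO[el] * n for el, n in formula.items())

# S-Methyl-S-phenylsulfoximine (C7H9NOS), as an illustrative example.
m_14N = mono_mass({"C": 7, "H": 9, "N": 1, "O": 1, "S": 1})
m_15N = mono_mass({"C": 7, "H": 9, "15N": 1, "O": 1, "S": 1})
print(round(m_14N, 4))          # 155.0405
print(round(m_15N - m_14N, 4))  # 0.997 — the shift expected for a single 15N label
```

The same +0.997 Da offset applies to every 15 N-containing species in the HRMS traces (iminoiodinane I, iodonitrene II, the sulfanenitriles), which is what makes the labeling experiments diagnostic.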
Moreover, the combination of 15 N and 2 H labelling and multinuclear ( 15 N, 13 C, 1 H) NMR experiments supported the proposed mechanism and the role of the sulfanenitrile intermediates V and VI . Li and collaborators extended the one‐pot NH‐ and O‐transfer methodology for accessing NH‐sulfoximines from sulfides 7 (Scheme ). A detailed screening of nitrogen sources, oxidizing agents, and solvents was conducted, identifying ammonium carbonate (1.5 equiv.) and (diacetoxyiodo)benzene (2.3 equiv.) as a suitable combination for the preparation of NH‐sulfoximines from sulfides, also using methanol as the reaction solvent (Scheme ). Satisfactory results were also achieved by employing ammonium oxalate, ammonium fluoride, ammonium formate, and ammonium benzoate. Concerning the oxidant, (bis(trifluoroacetoxy)iodo)benzene, NCS, NBS, molecular iodine, iodosylbenzene, 2‐iodoxybenzoic acid, and 1,3‐dichloro‐5,5‐dimethylhydantoin were found to be ineffective. Zheng and Xu reported a strategy for the synthesis of NH‐sulfoximines 3 starting from sulfides 7 , using the combination of a hypervalent iodine(III) reagent and a nitrogen source under aqueous conditions (Scheme ). The one‐pot NH‐ and O‐transfer reaction was conducted in nanomicelles. The authors examined several surfactants (TPGS‐750‐M, PEG‐400, Tween 80, Nok), observing the best yields with 2 wt% TPGS‐750‐M. Ammonium carbonate was selected as the nitrogen source due to its high aqueous solubility. With the aim of developing a more sustainable method, recycling of the hypervalent iodine(III) reagent was pursued. In particular, the efficient permeation inside the micelles of the lipophilic (diacetoxytrifluoro)iodobenzene provided high yields of NH‐sulfoximines. However, concentrated ammonia was required to consume the excess of oxidant, promoting dissolution of the resulting sulfoximine in the aqueous phase.
Extraction of the aqueous phase with organic solvents allowed recovery of the trifluoroiodobenzene, which could be re‐used upon oxidation with sodium perborate tetrahydrate and trifluoromethanesulfonic acid in acetic acid. This new protocol was found to be efficient with several aryl, heteroaryl, and alkyl sulfides, forming the corresponding NH‐sulfoximines in good to excellent yields. The scalability of the process and its application to biologically relevant compounds were demonstrated. The mechanism of the reaction was proposed to be closely related to that previously reported, forming an iodonitrene intermediate by reacting trifluoroiodosylbenzene with ammonia. Reaction of the iodonitrene intermediate with the sulfide affords a sulfilimine, which undergoes nucleophilic attack by the acetate anion or water to release a sulfanenitrile. Finally, the attack of water is expected to occur outside the micelle, affording the desired sulfoximine (Scheme ). The one‐pot NH‐ and O‐transfer strategy has been rapidly adopted into the armoury of synthetic methods and employed for the preparation of biologically relevant molecules. In 2019, Reboul reported a novel multistep strategy for the preparation of Atuveciclib, a PTEFb/CDK9 inhibitor and a promising drug for cancer therapy. Reboul described a synthetic approach involving a late‐stage sulfoximination of a sulfide by applying standard reaction conditions (2.1 equiv. of PhI(OAc) 2 , 1.5 equiv. of ammonium carbamate, in methanol at room temperature for 30 minutes). Interestingly, the final product was obtained in 51 % overall yield as a racemic mixture (Scheme ). Moreover, enantioenriched ( S )‐Atuveciclib was obtained in a satisfactory 45 % yield using the N‐transfer conditions adopted with sulfoxides. Luisi, Bull, and Rollin developed a straightforward method to access unprecedented glycosyl sulfoximines 10 via the one‐pot NH‐ and O‐transfer to anomeric thioglycosides 9 (Scheme ).
Peracetylated S ‐methyl‐β‐glucopyranoside, tested as a model substrate, was transformed into the corresponding NH‐sulfoximine by using 2.5 equiv. of iodosylbenzene and 2 equiv. of ammonium carbamate in i PrOH at room temperature for 3 h (Scheme ). Methanol was an unsuitable reaction solvent due to the competitive formation of the corresponding O ‐methyl glucopyranoside, likely resulting from displacement of the sulfonimidoyl group. The scope of the reaction was explored, disclosing good tolerance for aryl and cycloalkyl S‐substituents (Scheme ). Remarkably, the reaction proceeds with good to excellent stereoselectivity (dr up to 95 : 5), while lower stereoselectivity (dr=70 : 30) was observed when electron‐withdrawing substituents were installed on the aromatic S‐substituent. The stereochemistry at the sulfur atom was established by X‐ray analysis and computational models. The structural variability was additionally explored by modifying the sugar portion, as for peracetylated mannose ( 10 a ), galactose ( 10 b ) and lactose ( 10 c ) (Scheme ). An interesting application of the one‐pot NH‐ and O‐transfer methodology has been reported by Bräse, who developed the synthesis of bicyclo[1.1.1]pentyl (BCP) sulfoximines 12 starting from the corresponding BCP sulfides 11 (Scheme ). These new structural motifs are of interest in drug discovery as 3D mimics of aromatic rings. The optimal reaction conditions required a large excess of the oxidant (3 equiv. of PhI(OAc) 2 ) and 2 equiv. of ammonium carbonate. The reaction was tolerant of several functional groups, furnishing good yields of the corresponding BCP sulfoximines. However, the reaction was sensitive to steric hindrance at the sulfur atom. The protocol was applied to the preparation of the p ‐nitrophenyl‐substituted BCP sulfoximine 12 a , a precursor for the synthesis of a BCP analogue of Roniciclib (Scheme ).
Fluorinated sulfoximines have interesting applications in synthetic chemistry as nucleophiles, radical transfer agents, directing groups, and building blocks for the preparation of liquid crystals. However, efficient methods for accessing fluorinated sulfoximines have been introduced only recently. Reboul and Magnier developed a general approach for the synthesis of S‐fluoroalkylated NH‐sulfoximines 14 from fluoroalkyl sulfides 13 . This metal‐free strategy adopts the one‐pot NH‐ and O‐transfer to sulfides by using ammonium carbamate (1.5 equiv.) as the nitrogen source and DIB (2.1 equiv.) as the oxidizing agent, with trifluoroethanol (TFE) as a polar, hydrogen‐bond‐donating solvent (Scheme ). The optimal reaction conditions achieved high conversion of the relatively poorly nucleophilic sulfides but formed a mixture of NH‐sulfoximines 3 and N‐acetyl (N−Ac) sulfoximines 15 . A final deprotection step by treatment with HCl provided the desired fluoroalkylated NH‐sulfoximines. Satisfactory results were obtained with several fluorinated alkyl and aryl sulfides, and the process was scalable up to 12 mmol. The protocol was effective with sulfides bearing perfluorobutyl, CF 2 Br, CFCl 2 , CF 2 H, and CH 2 F groups. The reaction was subjected to a thorough mechanistic investigation by 19 F NMR and HRMS analysis. The reaction with (4‐methoxyphenyl)difluoromethyl thioether as the model substrate was monitored by 19 F NMR, and the signals of NH‐sulfoximine 3 and N−Ac‐sulfoximine 15 were observed, as well as those of sulfanenitrile VI and iodonium salt III (Scheme ). An activated nitrene intermediate was proposed that reacted with the sulfide, leading to sulfilimine IV . Nucleophilic attack of the acetate anion on IV afforded the short‐lived sulfanenitrile VI . The trifluoroethanol solvent was proposed to play an active role in forming sulfoximine 3 , either in reaction with DIB and/or with the sulfanenitrile, giving compounds III and 6 respectively (Scheme ). Very recently, Craven et al.
reported several strategies for the preparation of vinyl sulfoximines. Vinyl sulfoximines offer interesting potential as chiral electrophilic warheads in covalent inhibitors, which can also incorporate additional functionality through the nitrogen group to provide fully functionalized probes. Substituted vinyl sulfoximines 17 were generated directly from vinyl sulfides 16 by NH and O transfer, again indicating the very high chemoselectivity of this reaction (Scheme , a). To form terminal vinyl sulfoximines 19 , given the relative instability of the vinyl sulfide, the sulfoximine group was formed on β‐hydroxysulfides 18 (Scheme , b). Treatment of the β‐hydroxysulfoximine products with MsCl effected elimination to the terminal vinyl sulfoximines. Due to the relevance of thiophene sulfones in the field of photovoltaics, or as fluorophores and photoswitches, Bolm and co‐workers investigated the synthesis of thiophene NH‐sulfoximines. In order to achieve the concomitant imination/oxidation at the sulfur atom of thiophene, the authors applied the one‐pot NH‐ and O‐transfer methodology for the preparation of the corresponding NH‐sulfoximine. By using a large excess of DIB (5 equiv.) and ammonium carbonate (3 equiv.), dibenzothiophene furnished the corresponding NH‐sulfoximine 21 a in 80 % yield (Scheme ). The reaction was further applied to thiophenes 20 substituted at C2 or C3, affording the corresponding NH‐sulfoximines 21 in high yields. In 2018, Bolm and coworkers reported an Fe(II)‐catalyzed method for the direct preparation of NH‐sulfoximines from sulfoxides. This strategy involved the use of FeSO 4 /phenanthroline (at a loading of 20–40 mol%) and an arylhydroxylamine derivative as the NH‐donor in acetonitrile at 30 °C under an argon atmosphere (Scheme ). In this procedure, the use of a bench‐stable aminating agent avoids the use of oxidants.
The imination protocol furnished good to high yields (70–98 %) with several S‐aryl and S‐alkyl substituted sulfoxides 1 (Scheme ). Moreover, the protocol enables the preparation of NH‐sulfoximines 3 bearing various heterocycles (2‐pyridinyl, benzofuranyl, benzothienyl, and indolyl) as the S‐substituents. The authors proposed an iron nitrene complex as the key reaction intermediate, transferring the nitrogen to the sulfur atom of the sulfoxide. Very recently, Willis and co‐workers reported the preparation of NH‐sulfoximines exploiting the generation and trapping of an unprecedented electrophilic sulfinyl nitrene. The protocol involves sulfinylhydroxylamine reagent 22 , which provides the reactive sulfinyl nitrene upon treatment with organolithium or Grignard reagents through an N−O bond fragmentation process. The subsequent addition of a different carbon nucleophile enables the preparation of the corresponding sulfoximines 3 in moderate to good yields (Scheme ). The scope of the reaction was widely explored, preparing sulfoximines bearing functionalized aryl, heteroaryl, alkyl, vinyl and allyl substituents. Interestingly, the one‐pot reaction proceeds rapidly, affording the desired products within 16 min in THF at −78 °C. Moreover, the addition of an electrophile after the reaction with the second carbon nucleophile resulted in the direct preparation of N‐functionalized sulfoximines in good yields. An efficient method for the synthesis of enantioenriched NH‐sulfoximines, from optically active cyclic sulfonimidates, has been recently described by Stockman and Moore. The required chiral enantioenriched sulfonimidates were obtained from the corresponding sulfinamides, in turn prepared from sulfinyl chlorides and ( R )‐phenyl glycinol. This was followed by intramolecular cyclization upon treatment with N‐chlorosuccinimide (NCS) or tert ‐butyl hypochlorite ( t BuOCl) and, typically, separation of the S‐diastereoisomers.
The authors optimized the ring opening of sulfonimidates 23 with Grignard reagents en route to sulfoximines 24 (Scheme ). S‐Methyl sulfonimidates furnished a mixture of diastereoisomers of the corresponding sulfoximines, due to a competitive elimination causing ring opening and loss of S‐stereochemistry, with subsequent attack on the methylene derivative resulting in racemization at the sulfur center. On the other hand, S‐aryl sulfonimidates reacted with high stereospecificity, affording sulfoximines as single diastereoisomers with inversion of configuration at the sulfur center. Alkyl, aryl, and heteroaryl (thienyl, pyridyl) Grignard reagents were suitable for sulfonimidate ring‐opening reactions. Removal of the chiral auxiliary upon treatment with oxygen and NaOH in methyl tert ‐butyl ether (MTBE) afforded highly enantioenriched NH‐sulfoximines 3 in good yields (Scheme ). [68] Maruoka and Kano reported a powerful alternative approach, based on the S‐arylation and S‐alkylation of sulfinamides, for the asymmetric synthesis of chiral N ‐pivaloyl sulfoximines (Scheme , a). The sulfur‐chemoselective alkylation was achieved under basic conditions in dioxane, using alkyl iodides and bromides and chiral enantioenriched sulfinamides in the presence of 15‐crown‐5 as the ligand. The process allowed the preparation of N‐acylated sulfoximines in good yields and high enantioselectivity. A different approach was needed for the sulfur‐chemoselective arylation of chiral enantioenriched sulfinamides. In this case the S‐aryl substituent was introduced by using a suitable diaryliodonium salt in the presence of a copper catalyst. Once again, chiral enantioenriched N‐acylated sulfoximines were obtained in good yields and optical purity. Interestingly, the availability of two protocols for S‐alkylation and S‐arylation allows access to both enantiomers of a given chiral sulfoximine by the judicious ordering of steps.
Moreover, the authors developed effective N‐deprotection protocols for preparing highly enantioenriched NH‐sulfoximines 3 (Scheme , b). The potential of this synthetic strategy (S‐alkylation/arylation and deprotection) was demonstrated by the synthesis of an optically active analogue of the COX‐2 inhibitor Vioxx from sulfoximine 25 , and a precursor of the lead compound BAY 1143572 (Scheme , b,c). A highly selective kinetic resolution of racemic sulfoximines was recently developed by Bolm. The protocol employed racemic NH‐sulfoximines, an enal, and a suitable chiral N‐heterocyclic carbene (NHC) catalyst. Two NHC catalysts, able to provide both enantiomers of chiral NH‐sulfoximines 3 , were identified for highly selective resolutions. The stereoselective amidation did not require additional acyl transfer agents, and the process could be run on gram scale. The usefulness of the methodology was demonstrated with the preparation of a human Factor Xa inhibitor ‐ T (Scheme ). In 2016, Magnier and Vo‐Thanh reported that a perfluoroalkylated sulfoximidoyl moiety could behave as an ortho ‐directing group in the lithiation‐trapping sequence of (hetero)arenes. Similarly to sulfones and sulfonamides, fluorinated sulfoximines exhibited directed‐metalation capability, participating in the coordination of the lithium ion at the ortho ‐position of aryllithium complexes. Under optimized conditions, ortho ‐lithiation of NH‐sulfoximine 26 occurred by using 2 equivalents of n BuLi in THF at −50 °C. Presumably, the first equivalent of base removes the nitrogen proton, likely affecting the kinetics of the ortho ‐lithiation step by the second equivalent of base. Upon reaction with electrophiles, ortho ‐functionalized sulfoximines 27 were then obtained in modest to excellent yields (Scheme ). Several electrophiles, including halogens (bromine, fluorine and iodine), azido, and pinacol borane moieties, have been introduced with satisfactory results.
The lithiation‐trapping sequence with B(OMe) 3 and subsequent reaction with H 2 O 2 led to the formation of the interesting phenolic compound 27 b . Moreover, stannylation with Bu 3 SnCl and silylation with TMSCl afforded the corresponding sulfoximines 27 a and 27 c in very good yields (Scheme ). Magnier and Anselmi described a modified Stille reaction under microwave conditions for the preparation of ortho ‐vinylaryl‐trifluoromethylated NH‐sulfoximines. Several ortho ‐vinylaryl sulfoximines 29 were obtained via the Pd‐catalyzed reaction of ortho ‐iodoaryl sulfoximines 28 with vinylstannanes (Scheme , a). Similarly, the Pd‐catalyzed Suzuki‐Miyaura cross coupling of vinylboron compounds with trifluoromethyl ortho ‐iodoaryl NH‐sulfoximines was optimized under microwave conditions. Several functionalized trifluoromethyl aryl‐substituted NH‐sulfoximines 29 were prepared in good yields (Scheme , b). The method represents a robust and effective alternative route to trifluoromethylated NH‐sulfoximines. The robustness of the protocol was further demonstrated by gram‐scale preparations of trifluoromethylated NH‐sulfoximines without any substantial loss of yield. Vinylation reactions represent an important tactic in organic synthesis. Vinyl sulfoximines have been widely exploited as chiral auxiliaries, ligands, Michael acceptors, dienophiles in pericyclic reactions, and precursors for the synthesis of allylic sulfoximines. The main approaches for the preparation of vinyl sulfoximines involve the hydroxyalkylation‐elimination of metalated alkyl sulfoximines and the carbometalation of alkynyl sulfoximines. In 2016, a new route to vinyl NH‐sulfoximines was developed by Arvidsson and Naicker, who explored the reaction of diethyl(arylsulfonimidoylmethyl)phosphonates 30 with aldehydes under Horner‐Wadsworth‐Emmons (HWE) conditions (Scheme ).
When the reaction was performed at −78 °C with n‐ BuLi, the desired vinyl‐NH‐sulfoximines 31 were obtained with complete E ‐selectivity. Several functionalized aromatic and aliphatic aldehydes subjected to the HWE protocol afforded the desired products in excellent yields. Interestingly, this approach is directly applicable to NH‐sulfoximines, avoiding additional protection/deprotection steps. As reported by Bharatam et al., the S=N double bond consists of a single covalent bond and a strong ionic interaction, without any substantial π ‐overlap. Consequently, n BuLi is expected to abstract the proton of the more acidic activated methylene group without reacting with the NH group.
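The complete E‐selectivity reported for these HWE reactions is typically confirmed by the vicinal 1H NMR coupling constant across the newly formed double bond. A minimal sketch of this standard assignment, using textbook 3J(H,H) ranges (the numerical thresholds are illustrative assumptions, not values from the papers discussed):

```python
def assign_alkene_geometry(j_hh_hz: float) -> str:
    """Classify a 1,2-disubstituted alkene as E or Z from the vicinal
    3J(H,H) coupling constant, using common textbook ranges:
    trans (E) couplings ~12-18 Hz, cis (Z) couplings ~6-12 Hz."""
    if j_hh_hz >= 12.0:
        return "E"
    if j_hh_hz >= 6.0:
        return "Z"
    raise ValueError("coupling too small for a 1,2-disubstituted alkene")

# e.g. a trans-coupled vinyl proton pair at 15.4 Hz
print(assign_alkene_geometry(15.4))  # → E
```

In practice the ranges overlap near 12 Hz and depend on the substituents, so borderline cases are resolved with NOE experiments rather than the coupling constant alone.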
Flow Technology Applications in the Synthesis of NH‐Sulfoximines

The use of flow technology in the development of safer, cleaner, and more sustainable synthetic methodologies encompasses procedures for the preparation of NH‐sulfoximines. In 2015, Kappe and coworkers reported the development of a continuous flow protocol for the direct synthesis of NH‐sulfoximine 33 , an intermediate in the early process routes for the synthesis of the ATR kinase inhibitor AZD6738. The low yields, poor selectivity, formation of various side‐products, and safety concerns encountered with the conventional batch approach led the authors to explore a continuous flow protocol. A mixture of sulfoxide and azidotrimethylsilane (TMSN 3 ), and fuming sulfuric acid, were introduced through two separate feeds into a coil reactor at 50 °C (Scheme ). Water and dichloromethane were used for the in‐line quenching and extraction. In striking contrast to the batch protocol, the flow reaction using fuming sulfuric acid afforded the corresponding sulfoximine with 90 % selectivity after only 10 to 15 min of reaction time at 50 °C. However, racemization of the resulting NH‐sulfoximine 33 occurred under the strongly acidic conditions. As described by Olah, the protonation of hydrazoic acid in superacids affords the H 2 N 3 + species, which acts as a strong electrophilic agent in the reaction with a sulfoxide. Luisi optimized the one‐pot O‐ and NH‐transfer protocol on sulfides, and the NH‐transfer to sulfoxides, using flow devices. The optimization study on methylphenyl sulfide was carried out in a Vapourtec R2 system equipped with a 10 mL PTFE reactor and 2 mL PTFE loops (Scheme , a). In order to avoid the risk of precipitation, a concentration of 0.2 M sulfide in methanol was employed. An initial screening of the solubility of the nitrogen source and oxidant in different solvents was needed to avoid clogging.
Under flow conditions, ammonium carbamate was difficult to handle, due to its high tendency to decompose, while ammonium carbonate dissolved slowly in methanol and the resulting solution needed to be filtered. Ammonium acetate and aqueous ammonia were found to be suitable ammonia sources. In the presence of 2 equivalents of PhI(OAc) 2 and 2 equivalents of NH 3(aq) , with a residence time of 15 minutes at 0 °C, the desired NH‐sulfoximine was obtained in 95 % yield. The use of sulfoxides as substrates required a concentration of 0.4 M. To manage the higher concentrations of PhI(OAc) 2 and N‐source, a different flow set‐up, consisting of a 10 mL PTFE coil reactor and syringe pumps, was assembled (Scheme , b). The optimal flow conditions used 2 equivalents of PhI(OAc) 2 and 2 equivalents of N‐source (ammonium acetate or aqueous ammonia) at 0 °C, with a residence time of 30 minutes. In comparison to the batch approach, the use of flow technology allowed a reduction in the equivalents of both the oxidant and the ammonia source. Moreover, the scope of the flow method was investigated considering the nature of the S‐substituent as well as the functional group tolerance. The continuous flow synthesis of the biologically relevant methionine sulfoximine (MTO) and of the enantioenriched sulfoximine ( R )‐ 3 a was reported. It is worth mentioning that the flow protocol was tested in a long‐run continuous flow synthesis, observing a productivity of 1.34 g/h for phenyl methyl sulfoximine 3 g .
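The flow parameters quoted in this section follow from two simple relationships: residence time τ = V/Q (so the combined flow rate is Q = V/τ) and steady‐state productivity = substrate feed rate × concentration × molecular weight × yield. A minimal numerical sketch (the feed split, molecular weight, and yield used below are illustrative assumptions, not data from the original reports):

```python
# Minimal flow-chemistry bookkeeping for a coil reactor, assuming two
# feeds merging into one reactor (the numbers below are illustrative).
def total_flow_rate_ml_min(reactor_volume_ml: float, residence_time_min: float) -> float:
    """Residence time tau = V / Q  =>  Q = V / tau."""
    return reactor_volume_ml / residence_time_min

def productivity_g_h(substrate_flow_ml_min: float, conc_mol_l: float,
                     mw_g_mol: float, yield_fraction: float) -> float:
    """Steady-state mass output of product per hour."""
    mol_per_min = substrate_flow_ml_min / 1000 * conc_mol_l
    return mol_per_min * 60 * mw_g_mol * yield_fraction

# 10 mL coil, 30 min residence time -> combined flow of ~0.33 mL/min
q_total = total_flow_rate_ml_min(10, 30)
print(round(q_total, 3))  # → 0.333

# Assume (illustratively) the substrate stream is half the combined flow,
# 0.4 M sulfoxide, product MW ~155 g/mol, 95 % yield.
print(round(productivity_g_h(q_total / 2, 0.4, 155.2, 0.95), 2))  # → 0.59
```

The reported long‐run productivity of 1.34 g/h depends on the actual feed composition and concentrations, which this sketch does not attempt to reproduce.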
Recent Developments in the Functionalization of NH‐Sulfoximines

The availability of new robust and effective methods to access NH‐sulfoximines boosted the development of methods for their functionalization. Recent advances in the field will be highlighted in this section, focusing on selected examples. In particular, the recently developed protocols for N‐sulfonylation, sulfenylation, phosphorylation, acylation, vinylation, arylation, cross‐coupling, and cyclization will be covered. Moreover, recent progress in the use of NH‐sulfoximines for the synthesis of heterocycles and for the preparation of new hypervalent iodine reagents will be discussed.

4.1 N‐Sulfonylation, sulfenylation, and phosphorylation

The development of synthetic strategies for the preparation of N‐sulfonyl sulfoximines is desirable, as these compounds have been disclosed as efficient chiral auxiliaries. [15(b)] Zeng and coworkers described the synthesis of N‐sulfonyl sulfoximines 34 via oxidative N−S bond formation by coupling of NH‐sulfoximines and sodium alkyl‐sulfinates. The protocol required I 2 (0.2 equiv.) as the catalyst and H 2 O 2 as the oxidant, in water at room temperature (Scheme , a). The reaction furnished good yields using varied aryl sulfinates coupled with substituted alkyl, aryl, and dialkyl sulfoximines 3 . The authors proposed that radical species might be involved in the process, as the addition of an excess of TEMPO, as the radical scavenger, inhibited the reaction. The proposed mechanism starts with the reaction of phenylsulfinate with radical iodine to form an S‐centered radical, which subsequently reacts with the NH‐sulfoximine to give the desired N‐sulfonyl sulfoximine 34 a (Scheme , b). Molecular iodine (I 2 ) is restored from HI by oxidation with H 2 O 2 or molecular oxygen. In contrast to N‐sulfonylation, the N‐sulfenylation of NH‐sulfoximines has been poorly explored, and the conventional routes had limitations due to the use of hazardous reagents.
However, elegant and efficient methods have recently been developed. In 2018, Zeng and coworkers reported a metal‐free, iodine‐catalyzed N−H/S−H dehydrocoupling reaction between NH‐sulfoximines 3 and thiols to afford N‐sulfenylsulfoximines 35 (Scheme , a). The reaction occurred with high yields in the presence of I 2 as the catalyst and H 2 O 2 as the oxidant, in PEG400 at 50 °C. A non‐toxic reaction medium, high atom economy, and broad functional group tolerance characterized this methodology. Wu and Guo described a metal‐catalyzed synthesis of N‐sulfenylsulfoximines by reacting NH‐sulfoximines and thiophenols. In this process, the reaction proceeded with 2 equivalents of thiophenol and 20 mol% of [Cu(DMAP) 4 I]I as the catalyst at room temperature. A variety of NH‐sulfoximines and thiols were tested, and the methodology exhibited very good functional group tolerance, providing moderate to good yields of the desired products 35 (Scheme , b). The same authors developed a sustainable preparation of N‐sulfenylsulfoximines 35 by reacting NH‐sulfoximines 3 and N‐(phenylthio)succinimides in water (Scheme , c). The presence of the commercial additive Tween 80 in the reaction medium allowed the preparation of the desired products in good to excellent yields, and the reaction exhibited good functional group tolerance. An efficient route to N‐phosphorylated sulfoximines under mild conditions was recently reported by Kandasamy and coworkers. NH‐sulfoximines underwent N−P coupling with dialkyl phosphites in the presence of Cu(OAc) 2 as the catalyst, triethylamine as the base, and molecular sieves, in toluene at 110 °C (Scheme ). High yields of N‐phosphorylated sulfoximines 36 could be obtained from heteroaromatic and dialkyl sulfoximines 3 . Moreover, the reaction was not inhibited by the addition of a radical scavenger, suggesting that it does not proceed by a radical pathway.
4.2 N‐Acylation

Over the last few decades, the renewed interest in N‐acylated sulfoximines prompted several research groups to develop novel and efficient N‐aroylation strategies. In fact, N‐acylated sulfoximines have been recently used as directing groups for C−H bond activation, and introduced as a structural motif in bioactive pseudopeptides. Sulfoximine‐promoted C−H activation and annulation strategies have enabled the construction of interesting structural motifs such as π‐conjugated polycyclic amides, spiro‐isoquinolones, pyranoisoquinolines, and oxepino‐pyridines, among others. From a synthetic point of view, the most traditional approach for the N‐acylation of sulfoximines involved the use of activated acyl chlorides. In 2016, Sekar reported the synthesis of N‐aroylated sulfoximines from methylarenes as aroyl sources and NH‐sulfoximines under iron(II) catalysis. The optimal conditions required FeSO 4 •7H 2 O as the catalyst, TBHP as the oxidant, and NCS, in acetonitrile at 85 °C (Scheme , a). Methylarenes bearing methyl, methoxy, and nitro groups, including in the ortho ‐position, gave the desired N‐aroylated sulfoximines 37 in good yields. No traces of the corresponding products were detected with methylfuran, methylthiophene, or methylpyridine. Moreover, the scope of the NH‐sulfoximines was investigated, leading to the N‐aroylated products in good to high yields. No product was detected when the reaction was run in the presence of radical scavengers, supporting a radical pathway for the process. Interestingly, the reaction also occurred with N‐chlorosulfoximine, instead of sulfoximine and NCS, indicating its possible involvement in the reaction mechanism. According to the proposed mechanism, the sequence of events begins with the oxidation of toluene to benzaldehyde, followed by formation of an aroyl radical from the latter aldehyde with Fe/TBHP, and generation of an amino radical from the N‐chlorosulfoximine.
Finally, the desired product is expected to arise from the coupling of the aroyl and amino radicals, as shown in Scheme (b). Another strategy for the N‐aroylation of sulfoximines, from aryl iodides and bromides, was recently reported by Sekar and coworkers. Two protocols were developed: one employed Pd/C (1 mol%) as the catalyst and K 2 CO 3 as the base, under CO atmosphere in DMF (Scheme ). Alternatively, the N‐aroylation was conducted using palladium nanoparticles (Pd‐BNP) as the catalyst and K 2 CO 3 as the base, under CO atmosphere in DMF at 80 °C (Scheme ). Several substituted iodoarenes and NH‐sulfoximines 3 were coupled, delivering the desired N‐aroylsulfoximines 37 in good to excellent yields. The proposed mechanism involves oxidative addition of Pd(0) to the aryl halide, followed by CO insertion, nucleophilic attack of the sulfoximine, and a final reductive elimination. Moreover, one of the main advantages of these procedures is that the Pd/C or Pd‐BNP catalysts can be recycled up to six times without significant loss of efficiency, and without leaching or residual metal contamination in the final product. The direct acylation of NH‐sulfoximines can also be performed with aldehydes under N‐heterocyclic carbene (NHC) catalysis, as reported by Guin. Good to excellent yields of N‐acyl sulfoximines 38 were obtained using thiazolium salt T1 , DBU as the base, and bisquinone O1 as the oxidant, in the presence of molecular sieves (Scheme , a). The reaction performed well using substituted NH‐sulfoximines and aromatic, heteroaromatic, aliphatic and α,β‐unsaturated aldehydes. The mechanism may involve the catalytic generation of a redox‐active acyl donor intermediate from the aldehyde, which reacts with the NH‐sulfoximine to furnish the expected N‐acyl derivative.
Interestingly, the acylation of NH‐sulfoximines with arylaldehydes can alternatively be performed under microwave irradiation in the presence of NBS, as recently reported by Naicker, Arvidsson and coworkers (Scheme , b). A visible‐light promoted method for the synthesis of N‐aroylsulfoximines 37 from aldehydes has been developed by Zeng. First, S‐methyl‐S‐phenylsulfoximine and p ‐nitrobenzaldehyde reacted in the presence of a mixture of the oxidants TBHP/K 2 S 2 O 8 , under air at room temperature and upon irradiation with simulated sunlight (xenon arc lamp), affording the desired N‐aroylsulfoximine 37 a in 80 % yield (Scheme ). The scope in aldehydes and NH‐sulfoximines was subsequently investigated, and the method demonstrated good tolerance toward several functional groups. Moreover, no racemization occurred under the reaction conditions used for N‐acylation, preserving the chirality of enantiomerically enriched sulfoximines. An alternative approach for the palladium‐catalyzed aroylation of aryl halides with sulfoximines has been reported by Yuan and Kumar, and employed chloroform as the CO precursor. The reaction required Pd(OAc) 2 as the catalyst, with DBU, KOH, and CHCl 3 for the in situ generation of CO, and proceeded in toluene at 80 °C (Scheme ). The scope of the reaction was investigated by varying the NH‐sulfoximines and aryl halides, obtaining the desired products 37 usually in good yields. Presumably, the reaction follows the typical palladium‐catalyzed carbonylative coupling pathway. An interesting method for accessing a wide range of N‐acyl sulfoximines has been developed by Kandasamy. The imino‐carbonylative acylation of NH‐sulfoximines occurred with aryl iodides in the presence of Mo(CO) 6 as the CO donor and 1,4‐diazabicyclo[2.2.2]octane (DABCO), in 1,4‐dioxane at 150 °C. The method showed good functional group tolerance, furnishing N‐acylsulfoximines 37 in 61–95 % yield (Scheme ).
In 2017, Kumagai reported the direct acylation of NH‐sulfoximines with carboxylic acids. Screening of different parameters led to the identification of 1,3‐dioxa‐5‐aza‐2,4,6‐triborinane (DATB) as the best catalyst for this transformation (Scheme ). The method allowed the preparation of N‐acylsulfoximines 38 in high yields, employing several functionalized carboxylic acids. In addition, the method was applied to an efficient synthesis of a biologically active compound (a Factor Xa inhibitor). Yotphan reported a copper‐catalyzed aroylation of NH‐sulfoximines using α‐ketoacids as aroylating agents. This strategy involved aryl‐ and heteroarylglyoxylic acid derivatives and NH‐sulfoximines in the presence of potassium persulfate (K 2 S 2 O 8 ) as the oxidant, in acetonitrile at 75 °C (Scheme ). The reaction performed very well, returning several functionalized N‐acylated sulfoximines 37 in good to excellent yields. Mechanistic investigations in the presence of radical scavengers such as 2,6‐bis(1,1‐dimethylethyl)‐4‐methyl‐phenol (BHT), TEMPO, and hydroquinone supported the involvement of radical species in this process. Interestingly, Cu(II) catalysis was mandatory for a successful decarboxylative coupling. Bolm recently reported the synthesis of sulfoximines bearing an α‐ketoester functionality at the nitrogen atom. The strategy involved a one‐pot reaction of NH‐sulfoximines and methoxy(mesyloxy)iodobenzene to afford hypervalent iodine reagents that underwent reaction with cyanoacetates, furnishing the desired products 39 in good yields (Scheme , a). The scope of the reaction was thoroughly explored by structural variation of both the sulfoximines and the cyanoacetates. In general, the protocol was effective with several aryl and alkyl sulfoximines, and the authors developed a sustainable, visible light‐promoted synthesis of N‐α‐ketoacylated sulfoximines 40 under air.
In this case, methoxy(phenyl)‐λ 3 ‐iodanyl methanesulfonate was employed as the sulfoximidoyl donor and reacted with arylalkynes to afford the desired products in very good yields (Scheme , b). A different approach for the N‐functionalization of NH‐sulfoximines was developed by Chen and coworkers. The authors reported a Curtius rearrangement‐based approach for the synthesis of sulfonimidoyl ureas 41 under metal‐free conditions (Scheme , a). The reaction enabled a straightforward preparation of sulfonimidoyl ureas by mixing NH‐sulfoximines 3 and acyl azides in acetonitrile at 80 °C. In a similar way, Bolm disclosed the synthesis of sulfoximidoyl carbamates 42 through the reaction of NH‐sulfoximines 3 with Morita‐Baylis‐Hillman carbonates in the presence of triethylamine and o ‐hydroxybenzoic acid in acetonitrile at 50 °C (Scheme , b). The proposed mechanism involves the base‐promoted decarboxylation of the starting carbonate, followed by deprotonation of the NH‐sulfoximine by tert ‐butoxide, leading to the ion pair A1 (Scheme , b). The anionic sulfoximine is supposed to attack a second molecule of carbonate, activated by coordination of o ‐HBA, affording the product and restoring the initial tert ‐butoxide ammonium salt A0 .

4.3 Preparation of N‐halogen sulfoximines

N‐halogen sulfoximines are useful reagents for further functionalization of the nitrogen atom. Some efficient strategies for the synthesis of N‐halogen sulfoximines have been recently developed. In 2014, Bolm and coworkers described the preparation of N‐chloro sulfoximines 43 from NH‐sulfoximines 3 upon treatment with N‐chlorosuccinimide (Scheme , a). Similarly, N‐bromination can be performed with N‐bromosuccinimide (Scheme , b), [116] and N‐iodo sulfoximines 46 can be prepared with N‐iodosuccinimide or molecular iodine (Scheme , c).
Moreover, the transformation of N‐chloro and N‐bromo sulfoximines into N‐aroylated sulfoximines 37 and N‐trifluoromethylthiolated sulfoximines 45 , respectively, has been reported (Scheme , a and b). The preparation of novel hypervalent iodine(III) reagents through ligand exchange of NH‐sulfoximines with methoxy(tosyloxy)iodobenzene (MTIB) in acetonitrile has been recently documented by Bolm and coworkers. [18(g),28,118] The iodonium salts 47 were obtained in excellent yields by reacting different NH‐sulfoximines (Scheme ). These compounds exhibit satisfactory stability at room temperature in the solid state, and in solution over an extended reaction time. Moreover, the hypervalent iodine(III) reagents 47 were subsequently reacted with alkynes in the presence of DBU, affording N‐alkynylated sulfoximines 48 in moderate to good yields. In 2017, Bolm et al. reported the preparation of 1‐sulfoximidoyl‐1,2‐benziodoxoles 49 from NH‐sulfoximines 3 and benziodoxole triflate. The reaction proceeds in acetonitrile with 3 equivalents of sulfoximine at room temperature, and several S,S‐dialkyl, S,S‐diaryl and S‐alkyl‐S‐aryl sulfoximines have been successfully transformed in high yields (Scheme ). Interestingly, these hypervalent iodine reagents exhibit satisfactory stability: no decomposition was observed when storing solid samples at room temperature for five days, or at 50 °C for 12 h. Similarly, the products remained stable when dissolved in halogenated and alcoholic deuterated solvents, in deuterated DMSO, and in heavy water. Recently, some examples of sulfoximines incorporated into hypervalent iodine reagents have been reported. In 2019, Togni and Magnier described the synthesis of the hypervalent iodosulfoximine reagent 51 from S‐2‐iodophenyl‐S‐trifluoromethyl NH‐sulfoximine 28 a (Scheme , a).
The transformation proceeds in three steps through an isolable chloroiodane 50 , which could be crystallized in enantiopure form ( S )‐ 50 . Notably, hypervalent reagent 51 acts as an efficient trifluoromethyl transfer reagent. In a similar fashion, Wirth reported the synthesis of the optically active hypervalent iodine reagent ( S )‐ 52 by reacting ( S )‐S‐2‐iodophenyl‐S‐methyl NH‐sulfoximine 28 b with sodium perborate (Scheme , b).

4.4 N‐β‐Fluoroalkylation

Very recently, Bolm reported the in situ preparation of fluorinated sulfonimidoyl hypervalent iodine(III) reagents 53 , which reacted under photocatalytic conditions with styrenes to form N‐fluoroalkyl sulfoximines (Scheme , a). Diverse N‐fluoroalkyl sulfoximines 54 were prepared in high yields and with high regioselectivity under mild reaction conditions. The optimized one‐pot protocol used a ruthenium photocatalyst, and the scope of the reaction was widely explored using several functionalized NH‐sulfoximines 3 and styrene derivatives (Scheme , a). The proposed mechanism involves the in situ generation of 53 , which undergoes N−I bond cleavage by a single electron transfer (SET) operated by the excited photocatalyst (PC*) (Scheme , b). Subsequently, the N‐centered sulfoximidoyl radical undergoes regioselective addition to the double bond of the styrene, forming a benzylic radical. Further oxidation, promoted by the photocatalyst (PC+), leads to the corresponding benzyl cation, which reacts with a fluoride anion to furnish the final product, regenerating the ground state of the photocatalyst (PC).

4.5 N‐Arylation

The importance of N‐aryl sulfoximines and their derivatives lies in their use as potent chiral ligands. Several methodologies for the N‐arylation of NH‐sulfoximines have been reported in the last decade. In particular, this N‐functionalization can be achieved using different arylating agents such as aryl halides, aryl triflates, aryl boronic acids, aryl siloxanes, diaryl iodonium salts, and arynes.
An and Zhang reported a general method for the N‐arylation of NH‐sulfoximines using sodium arylsulfinates as efficient arylating agents. The optimal reaction conditions used Cu(OAc) 2 as an inexpensive catalyst and K 2 CO 3 as the base, in DMSO at 120 °C (Scheme ). The protocol was applied to several aryl NH‐sulfoximine 3 and arylsulfinate combinations, affording the desired N‐arylated sulfoximines 55 in good to excellent yields. Interestingly, the reaction proceeds with the same efficiency under either O 2 or Ar atmosphere, and the yield is not affected by the presence of TEMPO, suggesting that the reaction is unlikely to proceed through a radical pathway. König and Wimmer developed the N‐arylation of NH‐sulfoximines with electron‐rich arenes under visible‐light oxidative photoredox catalysis. The reaction proceeds with 9‐mesityl‐10‐methylacridinium perchlorate as the organic photocatalyst and Co(dmgH) 2 PyCl as a co‐catalyst, in degassed acetonitrile under N 2 atmosphere, upon irradiation with blue light at 455 nm for 20 h at 25 °C (Scheme ). A series of mono‐ and multi‐alkylated and halogenated arenes reacted with a broad range of aromatic and aliphatic, electron‐rich and electron‐poor NH‐sulfoximines 3 in satisfactory yields. Moreover, the mechanistic investigation showed that both the arenes and the NH‐sulfoximines were photo‐oxidized to the corresponding radical intermediates, which underwent radical‐radical cross‐coupling, leading to N‐arylated sulfoximines 55 . In 2018, Kwong reported a palladium‐catalyzed N‐arylation of NH‐sulfoximines using aryl sulfonates. The reaction involves Pd(OAc) 2 as the catalyst, MeO‐CM‐phos as the ligand, and K 2 CO 3 as the base, in t‐ BuOH as the solvent (Scheme ). Several aryl and alkenyl tosylates or mesylates were found to be suitable partners, and the reaction tolerated several functional groups as sulfoximine substituents, giving N‐arylated sulfoximines 55 in moderate to excellent yields.
An and Dong developed an N‐arylation method that involved the use of arylhydrazine hydrochlorides under copper(I) catalysis. The strategy requires CuBr as the catalyst, KOAc as the base, and acetone as the solvent, under O 2 atmosphere (Scheme ). Under optimized conditions, several S‐methyl‐S‐tolylsulfoximines could be N‐arylated, furnishing products 56 in good yields. Moreover, a wide array of ortho ‐, meta ‐ and para ‐substituted arylhydrazines with electron‐donating or ‐withdrawing groups were compatible with this method. Mechanistic experiments suggested a radical pathway for this N‐arylation process. Very recently, König and Wimmer developed the N‐arylation of sulfoximines via dual nickel photocatalysis. The optimized protocol used an iridium photocatalyst ([Ir(ppy) 2 (dtbbpy)]PF 6 ), NiBr 2 as the second metal catalyst with dtbbpy as the ligand, TMG (1,1,3,3‐tetramethylguanidine) as the base, and irradiation at 455 nm (Scheme ). Bromoarenes bearing different functional groups such as thioethers, amides, and carbamates, as well as brominated pyrimidines, pyrazines, and quinolines, were competent reaction partners, affording the desired products 55 in moderate to excellent yields. Alkyl as well as aryl NH‐sulfoximines 3 were found to be suitable for this N‐arylation reaction. No racemization was observed when the reaction was performed on enantiopure NH‐sulfoximines. Moreover, a scalability test in a custom‐made reactor was carried out on a preparative scale of 27 mmol, furnishing sulfoximine 55 a without any loss of yield. The nickel‐catalyzed N‐arylation of NH‐sulfoximines with aryl halides via paired electrolysis has been reported recently by Mei and co‐workers. The reaction proceeds with aryl bromides and chlorides, and affords the products 55 in good to excellent yields (Scheme ). Moreover, the mild reaction conditions are compatible with various functional groups, and the protocol is reported to be robust and operationally simple.
In fact, several pharmaceutical agents have been transformed, enabling the preparation of the corresponding sulfoximines and providing examples of efficient late‐stage functionalization of complex substrates. In 2016, Singh and co‐workers developed a sulfoximination of electron‐deficient heteroarenes. The strategy involves the reaction of isoquinoline‐N‐oxide with different NH‐sulfoximines in the presence of PyBroP (bromotripyrrolidinophosphonium hexafluorophosphate) as the N−O bond activating agent and diisopropylethylamine (DIPEA) as the base (Scheme ). Good to high yields of the corresponding N‐arylated products 57 were obtained using several substituted sulfoximines. The reaction is also efficient with various quinolines and pyridines, as well as with 1,10‐phenanthroline, 2,2’‐bipyridine, and quinine. In addition, the reaction with optically active sulfoximines afforded the corresponding products with high stereocontrol (ee >99 %). In 2018, Yotphan developed a methodology for the direct installation of the sulfoximine group at the C3 position of quinoxalinone substrates. The method required the use of 1 equiv. of quinoxalinone, 2 equiv. of NH‐sulfoximine, and K 2 S 2 O 8 as the oxidant in acetonitrile at 60 °C (Scheme ). The coupling products 58 were prepared in moderate to high yields, and preliminary studies on the reaction mechanism suggested a radical pathway. Due to the increasing interest in imidazo[1,2‐a]pyridines, a structural unit found in many natural and pharmaceutical products, Wu disclosed an oxidative strategy for the C−H sulfoximination of imidazopyridines. The reaction occurred between functionalized imidazopyridines and NH‐sulfoximines, using PhI(OAc) 2 in DMSO at 30 °C for 3 h, and afforded the desired products 59 in poor to high yields (Scheme ). The reaction mechanism is proposed to involve a radical pathway, as described for the preparation of compound 59 a from NH‐sulfoximine 3 ac (Scheme ). 
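Stereocontrol figures such as the ee >99 % quoted above are conventionally derived from the integrated peak areas of the two enantiomers in a chiral HPLC trace; the underlying arithmetic is simple, and a short sketch follows (the peak areas are made‐up numbers, purely for illustration):

```python
def enantiomeric_excess(area_major: float, area_minor: float) -> float:
    """ee (%) from the integrated chiral-HPLC peak areas of the two enantiomers."""
    total = area_major + area_minor
    if total <= 0:
        raise ValueError("peak areas must sum to a positive value")
    return 100.0 * abs(area_major - area_minor) / total

# Hypothetical trace: 99.6 vs 0.4 area units
print(f"{enantiomeric_excess(99.6, 0.4):.1f} % ee")
```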
Multicomponent reactions represent desirable strategies in organic chemistry, due to their atom economy, multiple‐bond‐forming efficiency, and the use of generally available starting materials. Along these lines, Song and Xu developed a three‐component reaction that couples NH‐sulfoximines 3 with alkynes and azides for the direct synthesis of trisubstituted triazolyl sulfoximines 60 (Scheme ). The transformation can be carried out under air and requires CuSCN as the catalyst and MeOLi as the base. Exploration of the reaction scope highlighted that the electronic properties of the sulfoximine moiety have no significant effect on the reaction yield. In contrast, electron‐rich and unsubstituted aryl acetylenes are generally the best‐performing substrates. In addition, satisfactory yields were observed with a broad variety of benzyl azides bearing different functional groups. 
4.6 Cyclization reactions NH‐sulfoximines can undergo several inter‐ and intramolecular reactions leading to heterocyclic scaffolds. Most of the intramolecular transformations that allow the preparation of endocyclic S−N heterocycles involve the formation of both a new C−C bond, via C−H activation of S‐aryl sulfoximines, and a new N−C bond. As a result, the S‐oxides of 1,2‐benzothiazines, dihydroisothiazoles, tetrahydro‐1,2‐thiazines, 1,2‐benzothiazepines, 1,2,4‐thiadiazines and benzoisothiazoles are accessible from NH‐sulfoximines. Moreover, five‐, six‐ and seven‐membered endocyclic sulfoximines can be accessed through various inter‐ and intramolecular cyclization reactions. In 2015, Bolm and coworkers disclosed the preparation of optically active 1,2‐benzothiazines 61 and 62 from ( S )‐S‐methyl‐S‐phenylsulfoximine 3 h and brominated 3‐aminobenzophenones (Scheme ). The reaction requires copper(I) bromide, 1,2‐dimethylethylenediamine and cesium carbonate, and affords the products 61 and 62 in good yield. 
Two years later, the same research group developed a strategy for the synthesis of dihydroisothiazole oxides 64 from S‐aryl‐S‐phenylpropyl‐NH‐sulfoximines 63 (Scheme , a). The transformation, a Hofmann‐Löffler‐Freytag‐type cyclization, requires molecular iodine, diacetoxyiodobenzene and visible‐light irradiation. Similarly, benzo[ d ]isothiazole 1‐oxides 66 can be obtained under the same reaction conditions from ortho ‐alkyl substituted S‐arylsulfoximines 65 (Scheme , b). Moreover, when ortho ‐alkyl substituted S‐aryl‐S‐phenylpropylsulfoximines were used, the reaction afforded a mixture of dihydroisothiazole oxides and benzo[ d ]isothiazole 1‐oxides. In 2016, Bolm reported an efficient method for the halocyclization of NH‐sulfoximines towards the synthesis of S‐oxides of dihydroisothiazoles and tetrahydro‐1,2‐thiazines, in the presence of (diacetoxyiodo)benzene as the oxidant and potassium iodide as the halogen source. The reaction occurred with excellent regio‐ and stereoselectivity, affording the corresponding five‐ and six‐membered heterocycles 67 in good to excellent yields (Scheme , a). Interest in the benzothiazepine scaffold inspired Bolm and co‐workers to develop a new method for the synthesis of 1,2‐benzothiazepine 1‐oxides 68 via a Rh‐catalyzed [4+3] annulation of NH‐sulfoximines with α,β‐unsaturated ketones. A wide range of functional groups were well tolerated, and the heterocyclic products could be obtained in high yields (Scheme , b). Moreover, thiadiazine 1‐oxides 69 could be efficiently prepared by the Cp*Co(III)‐catalyzed reaction of NH‐sulfoximines and 1,4,2‐dioxazol‐5‐ones, as reported by Chen (Scheme , c). Bolm developed the synthesis of thiadiazine 1‐oxides from sulfoximines and 1,4,2‐dioxazol‐5‐ones using rhodium catalysis. The reaction proceeds in dichloroethane, affording the desired products 69 in good yields (Scheme , d). 
In 2017, Dong and Li described the synthesis of benzoisothiazoles 70 by tandem annulation of NH‐sulfoximines and olefins (Scheme , e). The reaction involves ortho C−H activation, olefination, and a subsequent intramolecular aza‐Michael cyclization. Good yields of the desired products were achieved by using [Cp*RhCl 2 ] 2 as the catalyst, Cu(OAc) 2 ⋅H 2 O as the oxidant, and Na 2 CO 3 as the base, conducting the reaction in DCE at 110 °C. Moreover, the presence of a variety of functional groups was tolerated. Recently, Cramer and coworkers disclosed the enantioselective preparation of S‐chiral 1,2‐benzothiazines via NH‐sulfoximine C−H functionalization with diazoketones, catalyzed by optically active Rh(III) cyclopentadienyl‐based complexes (Scheme , a). The reported method proceeds efficiently with a broad range of diazoketones and affords the corresponding products 71 with high enantioselectivity using diversely substituted diarylsulfoximines. Moreover, the selectivity of the reaction was found to be boosted by the presence of a chiral carboxylic acid. The transformation is thought to begin with the coordination of the NH‐sulfoximine to the Rh(III) center, giving intermediate V1 or V2 , which evolves towards the enantio‐determining ortho ‐C−H activation through a concerted metalation‐deprotonation pathway, affording intermediate W (Scheme , a). Subsequently, the coordination of the diazo compound promotes the formation of carbenoid species Y , which undergoes insertion and deprotonation leading to ketone Z , which affords sulfoximine 71 after condensation with loss of water. Reasonably, coordination of the sulfoximine through the oxygen atom would lead to a different complex ( V3 ), which may evolve towards the product with inverted enantioselection (ent‐ 71 ). A year later, the same group developed a successful kinetic resolution of aryl alkyl NH‐sulfoximines via C−H functionalization under similar conditions (Scheme , b). 
In this case, a single enantiomer of the starting sulfoximine is efficiently transformed into the corresponding 1,2‐benzothiazine 71 , while the other remains unreacted and can be isolated in excellent optical purity. Shi and co‐workers reported the preparation of chiral 1,3‐disubstituted 1λ 4 ‐benzo[ e ][1,2]thiazine 1‐oxides 72 with excellent enantioselectivity from NH‐sulfoximines and α‐carbonyl sulfoxonium ylides under Ru(II) catalysis (Scheme ). The reaction proceeds through a C−H activation/annulation process and uses chiral binaphthyl monocarboxylic acids as the chiral ligands. The products were thereby obtained in high yields and enantioselectivities by desymmetrization or kinetic resolution. In 2018, Chen and co‐workers developed a facile synthesis of polycyclic sulfoximine derivatives via a one‐pot, one‐step annulation reaction, employing NH‐sulfoximines and aryl iodides as substrates and Pd(OAc) 2 /norbornene (NBE) as catalysts, to afford divergent tricyclic dibenzothiazines 73 or eight‐membered fused heterocyclic sulfoximines 74 and 75 (Scheme , a). Operational convenience, excellent selectivity, and good functional group tolerance characterize this strategy. A similar approach for the formation of fused medium‐sized sulfoximine polyheterocycles 76 has also been reported. The method consists of a multicomponent reaction of NH‐sulfoximines with aryl iodides and norbornadiene (NBA), in the presence of Pd(dba) 2 as the catalyst and (4‐F‐C 6 H 4 ) 3 P as the phosphine ligand (Scheme , b). Very recently, a novel one‐pot strategy for the synthesis of various functionalized thiadiazine 1‐oxides via C−H activation/cyclization between NH‐sulfoximines and N‐alkoxyamides was developed by Dong. High yields of the corresponding products 69 are thereby accessible by using [Cp*IrCl 2 ] 2 and AgSbF 6 as catalysts, in DCE at 140 °C (Scheme , c). 
In addition, fused isochromeno‐1,2‐benzothiazines 77 are accessible from sulfoximines, as reported by Liu, Li and coworkers (Scheme , d). The reaction involved the use of S‐phenyl sulfoximines and 4‐diazoisochroman‐3‐imine as the substrates and required rhodium(III) catalysis, affording the desired products in moderate to good yields. Novel five‐membered endocyclic sulfoximines can be prepared by the reaction of S‐chloromethyl NH‐sulfoximines 78 and aryl isothiocyanates, as reported by Li and Ge. The reaction scope was investigated under the optimal conditions (with Na 2 CO 3 as the base in acetonitrile at 70 °C for 20 h), affording the desired products 79 in good to high yields (Scheme , a). The proposed mechanism involves the nucleophilic attack of the sulfoximine on the isothiocyanate, followed by an intramolecular ring‐closing reaction of the tautomeric thiol derivative, with loss of HCl. In 2020, Lücking reported the synthesis of five‐, six‐, and seven‐membered cyclic sulfoximines 81 by reacting chloroalkylsulfoximines 80 with an aqueous solution of ammonia at 80 °C (Scheme , b).
N‐Sulfonylation, sulfenylation, and phosphorylation
The development of synthetic strategies for the preparation of N‐sulfonyl sulfoximines is desirable, as these compounds have been disclosed as efficient chiral auxiliaries. [15(b)] Zeng and coworkers described the synthesis of N‐sulfonyl sulfoximines 34 via oxidative N−S bond formation by coupling NH‐sulfoximines with sodium sulfinates. The protocol required I 2 (0.2 equiv.) as the catalyst and H 2 O 2 as the oxidant in water at room temperature (Scheme , a). The reaction furnished good yields using varied aryl sulfinates coupled with substituted alkyl, aryl, and dialkyl sulfoximines 3 . The authors proposed that radical species might be involved in the process, as the addition of an excess of TEMPO as a radical scavenger inhibited the reaction. The proposed mechanism starts with the reaction of phenylsulfinate with an iodine radical to form an S‐centered radical, which subsequently reacts with the NH‐sulfoximine to give the desired N‐sulfonyl sulfoximine 34 a (Scheme , b). Molecular iodine (I 2 ) is restored from HI by oxidation with H 2 O 2 or molecular oxygen. In contrast to N‐sulfonylation, the N‐sulfenylation of NH‐sulfoximines has been poorly explored, and conventional routes suffered from the use of hazardous reagents. Only recently have elegant and efficient methods been developed. In 2018, Zeng and coworkers reported a metal‐free, iodine‐catalyzed N−H/S−H dehydrocoupling reaction between NH‐sulfoximines 3 and thiols to afford N‐sulfenylsulfoximines 35 (Scheme , a). The reaction occurred in high yields in the presence of I 2 as the catalyst and H 2 O 2 as the oxidant in PEG400 at 50 °C. A non‐toxic reaction medium, high atom economy, and broad functional group tolerance characterize this methodology. Wu and Guo described a metal‐catalyzed synthesis of N‐sulfenylsulfoximines by reacting NH‐sulfoximines and thiophenols. 
In this process, the reaction proceeded with 2 equivalents of thiophenol and 20 mol% of [Cu(DMAP) 4 I]I as the catalyst at room temperature. A variety of NH‐sulfoximines and thiols were tested, and the methodology exhibited very good functional group tolerance, providing moderate to good yields of the desired products 35 (Scheme , b). The same authors developed a sustainable preparation of N‐sulfenylsulfoximines 35 by reacting NH‐sulfoximines 3 with N‐(phenylthio)succinimides in water (Scheme , c). The presence of the commercial additive Tween 80 in the reaction medium allowed the preparation of the desired products in good to excellent yields, and the reaction exhibited good functional group tolerance. An efficient route to N‐phosphorylated sulfoximines under mild conditions was recently reported by Kandasamy and coworkers. NH‐sulfoximines underwent N−P coupling with dialkyl phosphites in the presence of Cu(OAc) 2 as the catalyst and triethylamine as the base, in toluene at 110 °C over molecular sieves (Scheme ). High yields of N‐phosphorylated sulfoximines 36 could be obtained from heteroaromatic and dialkyl sulfoximines 3 . Moreover, the reaction was not inhibited by the addition of a radical scavenger, suggesting that it does not proceed by a radical pathway.
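The high atom economy credited to the N−H/S−H dehydrocoupling above can be made concrete by comparing the mass of the desired product with the combined mass of the reactants; a small sketch follows (the sulfoximine/thiophenol pairing is a plausible example chosen only for illustration, and the atomic masses are rounded averages):

```python
# Average atomic masses (g/mol), rounded
MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "S": 32.06}

def mol_weight(formula: dict) -> float:
    """Molecular weight from an element -> count mapping."""
    return sum(MASS[el] * n for el, n in formula.items())

def atom_economy(product: dict, reactants: list) -> float:
    """Atom economy (%) = MW(desired product) / sum of MW(reactants) x 100."""
    return 100.0 * mol_weight(product) / sum(mol_weight(r) for r in reactants)

# Illustrative N-H/S-H coupling: S-methyl-S-phenylsulfoximine (C7H9NOS)
# + thiophenol (C6H6S) -> N-sulfenylsulfoximine (C13H13NOS2), losing only "H2"
sulfoximine = {"C": 7, "H": 9, "N": 1, "O": 1, "S": 1}
thiophenol = {"C": 6, "H": 6, "S": 1}
product = {"C": 13, "H": 13, "N": 1, "O": 1, "S": 2}
print(f"atom economy: {atom_economy(product, [sulfoximine, thiophenol]):.1f} %")
```

Only two hydrogen atoms are lost formally, which is why the atom economy computed this way lands above 99 %.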
N‐Acylation
Over the last few decades, renewed interest in N‐acylated sulfoximines prompted several research groups to develop novel and efficient N‐aroylation strategies. In fact, N‐acylated sulfoximines have recently been used as directing groups for C−H bond activation and introduced as a structural motif in bioactive pseudopeptides. Sulfoximine‐promoted C−H activation and annulation strategies have enabled the construction of interesting structural motifs such as π‐conjugated polycyclic amides, spiro‐isoquinolones, pyranoisoquinolines, and oxepino‐pyridines, among others. From a synthetic point of view, the most traditional approach to the N‐acylation of sulfoximines involves the use of activated acyl chlorides. In 2016, Sekar reported the synthesis of N‐aroylated sulfoximines from methylarenes as aroyl sources and NH‐sulfoximines under iron(II) catalysis. The optimal conditions required FeSO 4 ⋅7H 2 O as the catalyst, TBHP as the oxidant, and NCS in acetonitrile at 85 °C (Scheme , a). Methylarenes bearing methyl, methoxy, and nitro groups, also in the ortho ‐position, gave the desired N‐aroylated sulfoximines 37 in good yields. No traces of the corresponding products were detected with methylfuran, methylthiophene, or methylpyridine. Moreover, the scope of NH‐sulfoximines was investigated, leading to the N‐aroylated products in good to high yields. No product was detected when the reaction was run in the presence of radical scavengers, supporting a radical pathway. Interestingly, the reaction also occurred with the N‐chlorosulfoximine in place of the sulfoximine and NCS, indicating its possible involvement in the reaction mechanism. According to the proposed mechanism, the sequence of events begins with the oxidation of toluene to benzaldehyde, formation of an aroyl radical from the latter aldehyde with Fe/TBHP, followed by generation of an amino radical from the N‐chlorosulfoximine. 
Finally, the desired product is expected to arise from the coupling of the aroyl and amino radicals, as shown in Scheme (b). Another strategy for the N‐aroylation of sulfoximines from aryl iodides and bromides was recently reported by Sekar and coworkers. Two protocols were developed: the first employed Pd/C (1 mol%) as the catalyst and K 2 CO 3 as the base, and proceeded under a CO atmosphere in DMF as the solvent (Scheme ). Alternatively, the N‐aroylation was conducted using palladium nanoparticles (Pd‐BNP) as the catalyst and K 2 CO 3 as the base, under a CO atmosphere in DMF at 80 °C (Scheme ). Several substituted iodoarenes and NH‐sulfoximines 3 were coupled, delivering the desired N‐aroylsulfoximines 37 in good to excellent yields. The proposed mechanism involves oxidative addition of Pd(0) to the aryl halide, followed by CO insertion, nucleophilic attack of the sulfoximine, and final reductive elimination. Moreover, one of the main advantages of these procedures is the recyclability, up to six times, of the Pd/C or Pd‐BNP catalysts without significant loss of efficiency, and without leaching or residual metal contamination in the final product. The direct acylation of NH‐sulfoximines can also be performed with aldehydes under N‐heterocyclic carbene (NHC) catalysis, as reported by Guin. Good to excellent yields of N‐acyl sulfoximines 38 were obtained using thiazolium salt T1 , DBU as the base, and bisquinone O1 as the oxidant, in the presence of molecular sieves (Scheme , a). The reaction performed well with substituted NH‐sulfoximines and aromatic, heteroaromatic, aliphatic and α,β‐unsaturated aldehydes. The mechanism may involve the catalytic generation of a redox‐active acyl donor intermediate from the aldehyde, which reacts with the NH‐sulfoximine to furnish the expected N‐acyl derivative. 
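The practical value of reusing the same Pd/C or Pd‐BNP batch can be illustrated with simple bookkeeping over successive runs; a minimal sketch (the per‐run yields below are invented numbers, meant only to show the calculation, not data from the cited work):

```python
def total_product_mmol(scale_mmol: float, yields: list) -> float:
    """Cumulative product (mmol) over successive runs reusing one catalyst batch."""
    return sum(scale_mmol * y for y in yields)

# Hypothetical 1.0 mmol runs, yield drifting slightly over six recycles
runs = [0.92, 0.91, 0.91, 0.90, 0.89, 0.88]
print(f"{total_product_mmol(1.0, runs):.2f} mmol of product from one catalyst charge")
```

With essentially constant per‐run yields, the product delivered per milligram of palladium scales nearly linearly with the number of recycles, which is the point of the reported recyclability.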
Interestingly, the acylation of NH‐sulfoximines with arylaldehydes can otherwise be performed under microwave irradiation in the presence of NBS, as recently reported by Naicker, Arvidsson and coworkers (Scheme , b). A visible‐light‐promoted method for the synthesis of N‐aroylsulfoximines 37 from aldehydes has been developed by Zeng. First, S‐methyl‐S‐phenylsulfoximine and p ‐nitrobenzaldehyde reacted in the presence of the oxidant mixture TBHP/K 2 S 2 O 8 under air at room temperature, upon irradiation with simulated sunlight (xenon arc lamp), affording the desired N‐aroylsulfoximine 37 a in 80 % yield (Scheme ). The scope of aldehydes and NH‐sulfoximines was subsequently investigated, and the method demonstrated good tolerance toward several functional groups. Moreover, no racemization occurred under the N‐acylation conditions, preserving the chirality of enantiomerically enriched sulfoximines. An alternative approach for the palladium‐catalyzed aroylation of aryl halides with sulfoximines has been reported by Yuan and Kumar, employing chloroform as the CO precursor. The reaction required Pd(OAc) 2 as the catalyst, with DBU, KOH, and CHCl 3 for the in situ generation of CO, and proceeds in toluene at 80 °C (Scheme ). The scope of the reaction was investigated by varying the NH‐sulfoximines and aryl halides, giving the desired products 37 usually in good yields. The reaction mechanism likely follows the typical palladium‐catalyzed carbonylative coupling pathway. An interesting method for accessing a wide range of N‐acyl sulfoximines has been developed by Kandasamy. The carbonylative acylation of NH‐sulfoximines occurred with aryl iodides in the presence of Mo(CO) 6 as the CO donor and 1,4‐diazabicyclo[2.2.2]octane (DABCO) in 1,4‐dioxane at 150 °C. The method showed good functional group tolerance, furnishing N‐acylsulfoximines 37 in 61–95 % yield (Scheme ). 
In 2017, Kumagai reported the direct acylation of NH‐sulfoximines with carboxylic acids. Screening of different parameters led to the identification of 1,3‐dioxa‐5‐aza‐2,4,6‐triborinane (DATB) as the best catalyst for this transformation (Scheme ). The method allowed the preparation of N‐acylsulfoximines 38 in high yields, employing several functionalized carboxylic acids. In addition, the method was applied to the synthesis of a biologically active compound (a Factor Xa inhibitor). Yotphan reported a copper‐catalyzed aroylation of NH‐sulfoximines using α‐ketoacids as aroylating agents. This strategy coupled aryl‐ and heteroarylglyoxylic acid derivatives with NH‐sulfoximines in the presence of potassium persulfate (K 2 S 2 O 8 ) as the oxidant in acetonitrile at 75 °C (Scheme ). The reaction performed very well, delivering several functionalized N‐acylated sulfoximines 37 in good to excellent yields. Mechanistic investigations with radical scavengers such as 2,6‐bis(1,1‐dimethylethyl)‐4‐methylphenol (BHT), TEMPO, and hydroquinone supported the involvement of radical species in this process. Interestingly, Cu(II) catalysis was mandatory for a successful decarboxylative coupling. Bolm recently reported the synthesis of sulfoximines bearing an α‐ketoester functionality at the nitrogen atom. The strategy involved a one‐pot reaction of NH‐sulfoximines with methoxy(mesyloxy)iodobenzene to afford hypervalent iodine reagents that underwent reaction with cyanoacetates, furnishing the desired products 39 in good yields (Scheme , a). The scope of the reaction was thoroughly explored by structural variation of both the sulfoximines and the cyanoacetates. In general, the protocol was effective with several aryl and alkyl sulfoximines, and the authors also developed a sustainable, visible‐light‐promoted synthesis of N‐α‐ketoacylated sulfoximines 40 under air. 
In this case, methoxy(phenyl)‐λ 3 ‐iodanyl methanesulfonate was employed as the sulfoximidoyl donor and reacted with arylalkynes to afford the desired products in very good yields (Scheme , b). A different approach for the N‐functionalization of NH‐sulfoximines was developed by Chen and coworkers. The authors reported a Curtius rearrangement‐based approach for the synthesis of sulfonimidoyl ureas 41 under metal‐free conditions (Scheme , a). The reaction enabled a straightforward preparation of sulfonimidoyl ureas by mixing NH‐sulfoximines 3 and acyl azides in acetonitrile at 80 °C. In a similar way, Bolm disclosed the synthesis of sulfoximidoyl carbamates 42 through the reaction of NH‐sulfoximines 3 with Morita‐Baylis‐Hillman carbonates in the presence of triethylamine and o ‐hydroxybenzoic acid in acetonitrile at 50 °C (Scheme , b). The proposed mechanism involves the base‐promoted decarboxylation of the starting carbonate, followed by deprotonation of the NH‐sulfoximine by tert ‐butoxide, leading to the ion pair A1 (Scheme , b). The anionic sulfoximine is supposed to attack a second molecule of carbonate, activated by coordination of o ‐HBA, affording the product and regenerating the initial ammonium tert ‐butoxide salt A0 .
Preparation of N‐halogen sulfoximines
N‐halogen sulfoximines are useful reagents for functionalization of the nitrogen atom, and some efficient strategies for their synthesis have been developed recently. In 2014, Bolm and coworkers described the preparation of N‐chloro sulfoximines 43 from NH‐sulfoximines 3 upon treatment with N‐chlorosuccinimide (Scheme , a). Similarly, N‐bromination can be performed with N‐bromosuccinimide (Scheme , b), [116] and N‐iodo sulfoximines 46 can be prepared with N‐iodosuccinimide or molecular iodine (Scheme , c). Moreover, the transformations of N‐chloro and N‐bromo sulfoximines into N‐aroylated sulfoximines 37 and N‐trifluoromethylthiolated sulfoximines 45 , respectively, have been reported (Scheme , a and b). The preparation of novel hypervalent iodine(III) reagents through ligand exchange of NH‐sulfoximines with methoxy(tosyloxy)iodobenzene (MTIB) in acetonitrile has been recently documented by Bolm and coworkers. [18(g),28,118] The iodonium salts 47 were obtained in excellent yields by reacting different NH‐sulfoximines (Scheme ). These compounds exhibit satisfactory stability at room temperature, both in the solid state and in solution over an extended time. Moreover, the hypervalent iodine(III) reagents 47 were subsequently reacted with alkynes in the presence of DBU, affording N‐alkynylated sulfoximines 48 in moderate to good yields. In 2017, Bolm et al. reported the preparation of 1‐sulfoximidoyl‐1,2‐benziodoxoles 49 from NH‐sulfoximines 3 and benziodoxole triflate. The reaction proceeds in acetonitrile with 3 equivalents of the sulfoximine at room temperature, and several S,S‐dialkyl, S,S‐diaryl and S‐alkyl‐S‐aryl sulfoximines have been successfully transformed in high yields (Scheme ). Interestingly, these hypervalent iodine reagents also exhibit satisfactory stability. 
In fact, no decomposition was observed upon storing a solid sample of the hypervalent iodine(III) reagent at room temperature for five days or at 50 °C for 12 h. Similarly, the products remained stable when dissolved in halogenated and alcoholic deuterated solvents, in deuterated DMSO, and in heavy water. Recently, some examples of sulfoximines incorporated into hypervalent iodine reagents have been reported. In 2019, Togni and Magnier described the synthesis of the hypervalent iodosulfoximine reagent 51 from S‐2‐iodophenyl‐S‐trifluoromethyl NH‐sulfoximine 28 a (Scheme , a). The transformation proceeds in three steps through an isolable chloroiodane 50 , which could be crystallized in enantiopure form ( S )‐ 50 . Notably, hypervalent reagent 51 acts as an efficient trifluoromethyl transfer reagent. In a similar fashion, Wirth reported the synthesis of the optically active hypervalent iodine reagent ( S )‐ 52 by reacting ( S )‐S‐2‐iodophenyl‐S‐methyl NH‐sulfoximine 28 b with sodium perborate (Scheme , b).
N‐β‐Fluoroalkylation
Very recently, Bolm reported the in situ preparation of fluorinated sulfonimidoyl hypervalent iodine(III) reagents 53 , which react under photocatalytic conditions with styrenes to form N‐fluoroalkyl sulfoximines (Scheme , a). Diverse N‐fluoroalkyl sulfoximines 54 were prepared in high yields and with high regioselectivity under mild reaction conditions. The optimized one‐pot protocol used a ruthenium photocatalyst, and the scope of the reaction was widely explored using several functionalized NH‐sulfoximines 3 and styrene derivatives (Scheme , a). The proposed mechanism involves the in situ generation of 53 , which undergoes N−I bond cleavage by single‐electron transfer (SET) mediated by the excited photocatalyst (PC*) (Scheme , b). The N‐centered sulfoximidoyl radical then adds regioselectively to the double bond of the styrene, forming a benzylic radical. Further oxidation, promoted by the oxidized photocatalyst (PC + ), leads to the corresponding benzyl cation, which reacts with a fluoride anion to furnish the final product, regenerating the ground state of the photocatalyst (PC).
N‐Arylation The importance of N‐aryl sulfoximines and their derivatives relies in their use as potent chiral ligands. Several methodologies for the N‐arylation of NH‐sulfoximines have been reported in the last decade. In particular, this N‐functionalization can be achieved using different arylating agents as aryl halides, aryl triflates, aryl boronic acids, aryl siloxanes, diaryl iodonium salts, and arynes. An and Zhang reported a general method for the N‐arylation of NH‐sulfoximines using sodium arylsulfinates as efficient arylating agent. The optimal reaction conditions used Cu(OAc) 2 as inexpensive catalyst, K 2 CO 3 as the base in DMSO at 120 °C (Scheme ). The protocol was applied to several aryl NH‐sulfoximines 3 and arylsulfinates combinations, affording the desired N‐arylated sulfoximines 55 in good to excellent yields. Interestingly, the reaction proceeds with the same efficiency under both O 2 or Ar atmosphere, and the yield is not affected by the presence of TEMPO, demonstrating that the reaction is unlikely to proceed through a radical pathway. König and Wimmerer developed the N‐arylation of NH‐sulfoximines with electron‐rich arenes under visible‐light oxidative photoredox catalysis. The reaction proceeds with 9‐mesityl‐10‐methylacridinium perchlorate as the organic photocatalyst, Co(dmgH) 2 PyCl as catalyst in degassed acetonitrile under N 2 atmosphere and upon irradiation with blue light at 455 nm for 20 h at 25 °C (Scheme ). A series of mono‐ and multi‐alkylated and halogenated arenes reacted with a broad range of aromatic and aliphatic electron‐rich and electron‐poor NH‐sulfoximines 3 with satisfactory yields. Moreover, the mechanistic investigation showed that both arenes and NH‐sulfoximines were photo‐oxidized to their corresponding radical intermediates, that underwent radical‐radical cross‐coupling reactions, leading to N‐arylated sulfoximines 55 . 
In 2018, Kwong reported a palladium catalyzed N‐arylation of NH‐sulfoximines by using aryl sulfonates. The reaction involves Pd(OAc) 2 as the catalyst, MeO‐CM‐phos as the ligand, K 2 CO 3 as the base in t‐ BuOH as the solvent (Scheme ). Several aryl and alkenyl tosylates or mesylates were found to be suitable partners, and the reaction tolerated several functional groups as sulfoximine substituents giving N‐aroylated sulfoximines 55 in moderate to excellent yield. An and Dong, developed a N‐arylation method that involved the use of arylhydrazine hydrochlorides under copper(I) catalysis. The strategy requires CuBr as the catalyst, KOAc as the base, acetone as the solvent, under O 2 atmosphere (Scheme ). Under optimized conditions, several S‐methyl‐S‐tolylsulfoximines could be N‐arylated furnishing products 56 in good yields. Moreover, a wide array of ortho ‐, meta ‐ and para ‐substituted arylhydrazines with electron‐donating or withdrawing groups were compatible with this method. Mechanistic experiments suggested a radical pathway for this N‐arylation process. Very recently, König and Wimmer developed the N‐arylation of sulfoximines via dual nickel photocatalysis. The optimized protocol used an iridium photocatalyst ([Ir‐(ppy) 2 (dtbbpy)]PF 6 ), NiBr 2 as the second metal catalyst, and dtbbpy as ligand, TMG (1,1,3,3‐tetramethylguanidine) as the base, and irradiation at 455 nm (Scheme ). Bromo arenes bearing different functional groups such as thioethers, amides, carbamates, as well as brominated pyrimidines, pyrazines, and quinolines were competent reaction partners, affording the desired products 55 in moderate to excellent yields. Alkyl as well as aryl NH‐sulfoximines 3 were found to be suitable for this N‐arylation reaction. No racemization was observed when the reaction was performed on enantiopure NH‐sulfoximines. 
Moreover, a scalability test in a custom‐made reactor was carried out on a preparative scale of 27 mmol, obtaining sulfoximine 55 a without any loss of yield. The nickel‐catalyzed N‐arylation of NH‐sulfoximines with aryl halides via paired electrolysis has been reported recently by Mey and co‐workers. The reaction proceeds with aryl bromides and chlorides, and affords the products 55 in good to excellent yields (Scheme ). Moreover, the mild reaction conditions are compatible with various functional groups, and the protocol is reported to be robust and operationally simple. In fact, several pharmaceutical agents have been transformed, enabling the preparation of the corresponding sulfoximines, and giving examples of efficient late stage functionalization reaction on complex substrates. In 2016, Singh and co‐workers developed a sulfoximination of electron‐deficient heteroarenes. The strategy involves the use of isoquinoline‐N‐oxide and different NH‐sulfoximines in the presence of PyBroP (bromo tripyrrolidinophosphonium hexafluorophosphate) as the N−O bond activating agent, and diisopropylethylamine (DIPEA) as the base (Scheme ). Good to high yields of corresponding N‐arylated products 57 were obtained using several substituted sulfoximines. This reaction is also efficient using various quinolines and pyridines, as well as with 1,10‐phenanthroline, 2,2’‐bipyridine, and quinine. In addition, the reaction with chiral optically active sulfoximines afforded the corresponding products with high stereocontrol (ee >99 %). In 2018, Yotphan developed a methodology for the direct installation of the sulfoximine group at C3 position of quinoxalinone substrates. The method required the use of 1 equiv. of quinoxalinone, 2 equiv. of NH‐sulfoximine, K 2 S 2 O 8 as the oxidant in acetonitrile at 60 °C (Scheme ). The coupling products 58 were prepared in moderate to high yields, and preliminary studies on the reaction mechanism suggested a radical pathway. 
Due to the increasing interest in imidazo[1,2‐a]pyridines, a structural unit found in many natural and pharmaceutical products, Wu disclosed an oxidative strategy for the C−H sulfoximination of imidazopyridines. The reaction occurred in the presence of functionalized imidazopyridines and NH‐sulfoximines, using PhI(OAc)2 in DMSO at 30 °C for 3 h, and afforded the desired products 59 in low to high yields (Scheme ). The reaction mechanism is proposed to involve a radical pathway, as described for the preparation of compound 59a from NH‐sulfoximine 3ac (Scheme ). Multicomponent reactions represent desirable strategies in organic chemistry, due to their atom economy, multiple‐bond‐forming efficiency, and use of generally available starting materials. Along these lines, Song and Xu developed a three‐component reaction that employed NH‐sulfoximines 3 with alkynes and azides for the direct synthesis of trisubstituted triazolyl sulfoximines 60 (Scheme ). The transformation can be achieved under air and requires CuSCN as the catalyst and MeOLi as the base. The scope of the reaction was explored, highlighting that the electronic properties of the sulfoximine moiety have no significant effect on the reaction yield. By contrast, electron‐rich and unsubstituted aryl acetylenes are generally the best performing substrates. In addition, satisfactory yields were observed with a broad variety of benzyl azides bearing different functional groups.
Cyclization reactions
NH‐sulfoximines can undergo several inter‐ and intramolecular reactions leading to heterocyclic scaffolds. Most of the intramolecular transformations that allow the preparation of endocyclic S−N heterocycles involve the formation of both a new C−C bond, via C−H activation of S‐aryl sulfoximines, and a new N−C bond. As a result, the S‐oxides of 1,2‐benzothiazines, dihydroisothiazoles, tetrahydro‐1,2‐thiazines, 1,2‐benzothiazepines, 1,2,4‐thiadiazines and benzoisothiazoles are accessible from NH‐sulfoximines. Moreover, five‐, six‐ and seven‐membered endocyclic sulfoximines can be obtained through various inter‐ and intramolecular cyclization reactions. In 2015, Bolm and coworkers disclosed the preparation of optically active 1,2‐benzothiazines 61 and 62 from (S)‐S‐methyl‐S‐phenylsulfoximine 3h and brominated 3‐aminobenzophenones (Scheme ). The reaction requires copper(I) bromide, 1,2‐dimethylethylenediamine and cesium carbonate, and affords the products 61 and 62 in good yields. Two years later, the same research group developed a strategy for the synthesis of dihydroisothiazole oxides 64 from S‐aryl‐S‐phenylpropyl‐NH‐sulfoximines 63 (Scheme , a). The transformation, a Hofmann‐Löffler‐Freytag‐type cyclization, requires molecular iodine, diacetoxyiodobenzene and visible‐light irradiation. Similarly, benzo[d]isothiazole 1‐oxides 66 can be obtained under the same reaction conditions from ortho‐alkyl‐substituted S‐arylsulfoximines 65 (Scheme , b). Moreover, when ortho‐alkyl‐substituted S‐aryl‐S‐phenylpropylsulfoximines were used, the reaction afforded a mixture of dihydroisothiazole oxides and benzo[d]isothiazole 1‐oxides. In 2016, Bolm reported an efficient method for the halocyclization of NH‐sulfoximines towards the synthesis of the S‐oxides of dihydroisothiazoles and tetrahydro‐1,2‐thiazines, in the presence of (diacetoxyiodo)benzene as the oxidant and potassium iodide as the halogen source.
The reaction occurred with excellent regio‐ and stereoselectivity, affording the corresponding five‐ and six‐membered heterocycles 67 in good to excellent yields (Scheme , a). Interest in the benzothiazepine scaffold inspired Bolm and co‐workers to develop a new method for the synthesis of 1,2‐benzothiazepine 1‐oxides 68 via a Rh‐catalyzed [4+3] annulation of NH‐sulfoximines with α,β‐unsaturated ketones. A wide range of functional groups was well tolerated, and the heterocyclic products could be obtained in high yields (Scheme , b). Moreover, thiadiazine 1‐oxides 69 could be efficiently prepared by the Cp*Co(III)‐catalyzed reaction of NH‐sulfoximines and 1,4,2‐dioxazol‐5‐ones, as reported by Chen (Scheme , c). Bolm developed the synthesis of thiadiazine 1‐oxides from sulfoximines and 1,4,2‐dioxazol‐5‐ones using rhodium catalysis. The reaction proceeds in dichloroethane, affording the desired products 69 in good yields (Scheme , d). In 2017, Dong and Li described the synthesis of benzoisothiazoles 70 by tandem annulation of NH‐sulfoximines and olefins (Scheme , e). The reaction involves ortho C−H activation, olefination, and subsequent intramolecular aza‐Michael cyclization. Good yields of the desired products were achieved by using [Cp*RhCl2]2 as the catalyst, Cu(OAc)2·H2O as the oxidant, and Na2CO3 as the base, conducting the reaction in DCE at 110 °C. Moreover, a variety of functional groups was tolerated. Recently, Cramer and coworkers disclosed the enantioselective preparation of S‐chiral 1,2‐benzothiazines via C−H functionalization of NH‐sulfoximines with diazoketones, catalyzed by optically active Rh(III) cyclopentadienyl‐based complexes (Scheme , a). The reported method proceeds efficiently with a broad range of diazoketones and affords the corresponding products 71 with high enantioselectivity using diversely substituted diarylsulfoximines.
Moreover, the selectivity of the reaction was found to be boosted by the presence of an optically active carboxylic acid. The transformation is thought to begin with the coordination of the NH‐sulfoximine to the Rh(III) center, giving intermediates V1 or V2, which evolve towards the enantio‐determining ortho‐C−H activation through a concerted metalation‐deprotonation pathway, affording intermediate W (Scheme , a). Subsequently, the coordination of the diazo compound promotes the formation of carbenoid species Y, which undergoes insertion and deprotonation leading to ketone Z, which affords sulfoximine 71 after condensation with loss of water. Reasonably, coordination of the sulfoximine through the oxygen atom would lead to a different complex (V3), which may evolve towards the product with inverted enantioselection (ent‐71). A year later, the same group developed a successful kinetic resolution of aryl alkyl NH‐sulfoximines via C−H functionalization under similar conditions (Scheme , b). In this case, a single enantiomer of the starting sulfoximine is efficiently transformed into the corresponding 1,2‐benzothiazine 71, while the other remains unreacted and can be isolated in excellent optical purity. Shi and co‐workers reported the preparation of chiral 1,3‐disubstituted‐1λ4‐benzo[e][1,2]thiazine 1‐oxides 72 with excellent enantioselectivity from NH‐sulfoximines and α‐carbonyl sulfoxonium ylides under Ru(II) catalysis (Scheme ). The reaction proceeds through a C−H activation/annulation process and uses chiral binaphthyl monocarboxylic acids as the chiral ligands. The products were thereby obtained in high yields and enantioselectivities by desymmetrization or kinetic resolution.
In 2018, Chen and co‐workers developed a facile synthesis of polycyclic sulfoximine derivatives by a one‐pot, one‐step annulation reaction, employing NH‐sulfoximines and aryl iodides as substrates and Pd(OAc)2/norbornene (NBE) as catalysts, to afford divergent tricyclic dibenzothiazines 73 or eight‐membered fused heterocyclic sulfoximines 74 and 75 (Scheme , a). Operational convenience, excellent selectivity, and good functional group tolerance characterize this strategy. A similar approach for the formation of fused medium‐sized sulfoximine polyheterocycles 76 has also been reported. The method consists of a multicomponent reaction of NH‐sulfoximines with aryl iodides and norbornadiene (NBA), in the presence of Pd(dba)2 as the catalyst and (4‐F‐C6H4)3P as the phosphine ligand (Scheme , b). Very recently, a novel one‐pot strategy for the synthesis of various functionalized thiadiazine 1‐oxides via C−H activation/cyclization between NH‐sulfoximines and N‐alkoxyamides was developed by Dong. High yields of the corresponding products 69 are accessible by using [Cp*IrCl2]2 and AgSbF6 as catalysts, in DCE at 140 °C (Scheme , c). In addition, fused isochromeno‐1,2‐benzothiazines 77 are accessible from sulfoximines, as reported by Liu, Li and coworkers (Scheme , d). The reaction involved the use of S‐phenyl sulfoximines and 4‐diazoisochroman‐3‐imine as the substrates and required rhodium(III) catalysis, affording the desired products in moderate to good yields. Novel five‐membered endocyclic sulfoximines can be prepared by the reaction of S‐chloromethyl NH‐sulfoximines 78 with aryl isothiocyanates, as reported by Li and Ge. The reaction scope was investigated under the optimal conditions (with Na2CO3 as the base in acetonitrile at 70 °C for 20 h), affording the desired products 79 in good to high yields (Scheme , a).
The proposed mechanism involves the nucleophilic attack of the sulfoximine on the isothiocyanate, followed by an intramolecular ring‐closing reaction of the tautomeric thiol derivative, with loss of HCl. In 2020, Lücking reported the synthesis of five‐, six‐, and seven‐membered cyclic sulfoximines 81 by reacting chloroalkylsulfoximines 80 with an aqueous solution of ammonia at 80 °C (Scheme , b).
Conclusions
Sulfoximines, the aza‐analogues of sulfones, have emerged as promising lead compounds in medicinal chemistry and useful building blocks for organic synthesis and catalysis. We have summarized the most recent advances in the field, focusing on modern tactics to access NH‐sulfoximines and encompassing the most recent methods for their transformation. Selective N−H functionalizations of sulfoximines, including metal‐catalyzed and metal‐free methods of N‐arylation, N‐acylation, N‐phosphorylation, N‐sulfenylation, N‐sulfonylation, and N‐halogenation, as well as other useful elaborations of the sulfoximine group, have been collected. The use of more sustainable technologies, such as flow technology, and the fine control of the stereochemistry at the sulfur center have been discussed. This review has mostly considered progress and achievements from 2015 onwards, showcasing the importance of, and need for, fundamental research in this field. Moreover, many challenges and opportunities are foreseen for the future, and we hope that reading this review will stimulate synthetic chemists to develop research projects involving these fascinating aza‐analogues of sulfones.
The authors declare no conflict of interest.
Michael Andresini obtained his M.Sci. degree (summa cum laude) in Chemical Sciences from University of Bari in 2018. After a short experience at BCMaterials (Basque Country, Spain), in 2019 he returned to University of Bari where he joined the PhD program in Drug Sciences under the supervision of Prof. Renzo Luisi. His research activity is focused on the development of synthetic strategies for the preparation of sulfur‐based functional groups and heterocycles, organometallic mediated transformations, and the use of microfluidic technology .
Arianna Tota obtained her M.Sci. (summa cum laude) in Chemistry and Pharmaceutical Technology at the University of Bari (Italy) in 2015. In 2020, she obtained her Ph.D. in Chemical and Molecular Sciences under the supervision of Prof. Renzo Luisi. Her research activity is focused on electrophilic nitrogen transfer to sulfur and the chemistry of nitrogen‐bearing compounds. In 2019, she was a visiting scholar at the Department of Synthetic Chemistry and Biological Chemistry, Kyoto University (Japan), working in the group of Prof. Aiichiro Nagaki. During this time, she worked on flow microreactor technology applied to organometallic chemistry.
Leonardo Degennaro obtained his master's degree in Chemistry and Pharmaceutical Technology in 1999 and his PhD in Applied Chemical and Enzymatic Synthesis in 2003. In 2002 he was a visiting scholar at the University of Groningen under the supervision of Prof. B. L. Feringa. In 2006 he was appointed assistant professor in Organic Chemistry at the Department of Pharmacy of the University of Bari. In 2011 he was a visiting assistant professor at the University of Kyoto, working in the group of Prof. J.‐i. Yoshida. His research activity is aimed at developing new stereocontrolled syntheses by using small heterocycles, organometallic species, and microreactor technology.
Dr James Bull is a University Research Fellow at Imperial College London. His research focuses on the development of synthetic and catalytic methods to access medicinally relevant structural motifs and heterocycles. He obtained his MSci degree from the University of Cambridge, then spent a year at GlaxoSmithKline. He returned to the University of Cambridge for his PhD with Professor Steven Ley. In 2007 he joined the Université de Montréal as a postdoc with Professor André Charette. He started a Ramsay Memorial Fellowship at Imperial College in 2009 and an EPSRC Career Acceleration Fellowship in 2011, and in 2016 was awarded a Royal Society University Research Fellowship.
Renzo Luisi is full professor of Organic Chemistry at the University of Bari (Italy). His research activity focuses on the chemistry of hetero‐substituted organolithiums, the development of new synthetic methodologies, and the use of flow technology. He obtained his PhD in 2000 under the guidance of Professor Saverio Florio. He was a visiting student at the Roger Adams Lab at Urbana‐Champaign in the group of Prof. Peter Beak, and a visiting professor at the University of Manchester in the group of Prof. Jonathan Clayden. He is an RSC fellow and recipient of the 2014 CINMPIS award for Innovation in Organic Synthesis.
As a service to our authors and readers, this journal provides supporting information supplied by the authors. Such materials are peer reviewed and may be re‐organized for online delivery, but are not copy‐edited or typeset. Technical support issues arising from supporting information (other than missing files) should be addressed to the authors.
Water chlorination increases the relative abundance of an antibiotic resistance marker in developing sourdough starters | 7884f407-b29b-46e4-b67f-5dda9627de90 | 11537093 | Microbiology[mh] | The use of sourdough starters is considered one of the great advancements in cooking . As the primary bread-leavening agent until the European Industrial Revolution , sourdough starters increased the nutritional value of bread and made the latter a staple food in many parts of the world . More recently, sourdough regained popularity and has increasingly been celebrated for its desirable gastronomic properties Traditional sourdough starters are made from a mixture of flour and water fermented naturally by diverse populations of yeast and bacteria . As the starter ferments, microbial populations that are initially diverse become quickly dominated by lactic acid bacteria and, to a lesser extent, by acetic acid bacteria . The dominant presence of lactic acid bacteria results in chemical, metabolic, and enzymatic activities that not only increase the nutritional value of sourdough bread but also inhibit the growth of other bacterial genera, contributing to the self-preserving properties of sourdough . Multiple factors explain the proper development of microbial communities in sourdough starters . For example, sourdough starters are heavily influenced by temperature variations and possibly by the presence of microorganisms in the air surrounding the sourdough starters . Crucially, the ingredients used in starter generation also play a defining role in shaping sourdough structure . However, the role of water quality, the other main ingredient in sourdough starter preparation, has only recently been recognized as a potential factor shaping microbial communities in sourdough starters . 
More specifically, the presence of disinfectant residuals commonly used in drinking water is widely believed among professional bakers to negatively impact the proper development of microbial communities found in sourdough starter, potentially changing the flavor profiles of the sourdough . Chlorination is the most common disinfectant used in public water distribution systems . Most often, water chlorination is achieved by adding sodium hypochlorite, which leads to the presence of hypochlorite ion (OCl-) in the medium, inhibiting bacterial growth by disrupting metabolism and enzymatic inactivation . The efficiency with which chlorination can stop the spread of most water-borne pathogens is considered one of the most outstanding achievements in public health of the past century . However, the inhibiting activity of chlorine present in water is not limited to pathogenic bacteria . Indeed, the potential impact of chlorine is predicted to extend to most microbial communities exposed to chlorinated water . The presence of free chlorine in the water could alter the chemical properties of sourdoughs. For example, hypochlorite ion (OCl-) is an oxidizing agent that can break glycosidic bonds within bread starches, reducing the gluten network and subsequently reducing the ability of flour components to gel together during the baking process . Water chlorination can also reduce flour’s lipid content due to the formation of chlorine derivatives . In addition to affecting the gustatory properties of the sourdough, such changes could affect developing microbial communities. Chlorination was also demonstrated to promote the spread of antibiotic resistance . 
Although chlorination at first contributes to reducing the number of antibiotic-resistant bacteria in treated water, the continued presence of low concentrations of free chlorine in water can select for antibiotic-resistant bacteria downstream from treatment plants and promote the exchange of antibiotic resistance genes among bacteria via horizontal gene transfer. Although the presence of antibiotic-resistance genes in sourdough and sourdough starters is unlikely to be a major health concern (even though DNA is not destroyed during cooking), antibiotic resistance is often associated with spoilage bacteria. Therefore, selecting for antibiotic-resistant bacteria in sourdough could affect the bread's quality and preservation. Here, using 16S rRNA amplicon sequencing, we investigate the effect of chlorinated water on the development of bacterial communities in sourdough starters. In addition, we monitor the possible effect of water chlorination on the spread of integron 1, an important genetic element associated with the spread of antibiotic resistance in pathogenic bacteria. Although we show that water chlorination has a limited impact on the overall bacterial community structure developing in sourdough starters, we found that chlorinated water increased the abundance of integron 1, an indicator associated with clinically important antibiotic resistance genes, pathogenic bacteria, and spoilage bacteria.
Establishment of sourdough starters
Sourdough starters are composed of two ingredients, flour and water, mixed together and regularly replenished to favor microbial growth. To control for possible variation in flour composition, we used a single bag of organic, stone-ground whole wheat flour (King Arthur Flour, Norwich, VT) for the entire experimental period of sourdough fermentation. According to the manufacturer's website, the flour is made from dark northern hard red wheat, a varietal of the common wheat (Triticum aestivum) with a higher protein content (~13.8%). We established the sourdough starters by combining 10 g of flour with 10 mL of control or treated water (see below) in sterilized polypropylene Nalgene bottles. Next, we mixed manually with an ethanol-sterilized glass rod, resulting in a dough yield (DY) of 200, or pastelike consistency. We then fed the sourdough starter every 24 hours (±2 hours), a step commonly called "backslopping," by discarding 50% of the initial DY and replenishing with a fresh mixture of flour and water to achieve the initial DY. We ensured the complete homogenization of each sourdough starter by pouring the dough into a sterile bag and homogenizing it with the BagMixer 5000 (Interscience, Saint Nom la Brétèche, France) on default parameters for 60 seconds. We then removed 50% of the starter by weight and replaced it with a fresh mixture of flour and water, as described above. Next, the dough was scraped down and remixed on the same settings. The freshly fed starter paste was then squeezed into a new sterilized polypropylene bottle. The bottles were placed in a dark cupboard for 24 hours at an ambient room temperature maintained at 22–25°C. We repeated the feeding procedure six times for a total fermentation time of 7 days. To identify the optimal growing conditions for investigating the possible effects of water chlorination, we established two independent trials.
First, we chose to limit the exposure of the sourdough starters to bacteria in the air by using an "air-tight" container, i.e., screwing the lid on tightly. The only times the container lids were removed were to refresh the starter. Second, we conducted a second set of experiments under identical conditions, but this time allowing exposure to the air of a working kitchen. For each experiment, we established three replicate starters for each control and chlorine treatment, for a total of 18 experimental populations. All measurements and mixing were done under sterile, aseptic conditions throughout the study.
Water chlorination treatments
To test for the possible effects of water chlorination on sourdough starters, we established and maintained sourdough starters with three water chlorination treatments resulting in three concentrations of free chlorine in the water: 0 ppm (control), 0.5 ppm (0.5 mg/L), and 4.0 ppm (4.0 mg/L). The treatments were chosen to reflect the minimum and maximum residual amounts of chlorine in finished drinking water in the United States of America (CDC, 2020). We prepared chlorinated waters daily before sourdough feeding by diluting sodium hypochlorite (NaOCl) into 100 mL of sterilized water. We tested the chlorination level of each water preparation using the N,N-diethyl-p-phenylenediamine (DPD) method as implemented in the LaMotte Chlorine test kit (LaMotte, Baltimore, USA). Briefly, 5 mL of chlorinated water was mixed with N,N-diethyl-p-phenylenediamine and compared with the provided color chart, indicating the available chlorine concentration of the solution. Because we were also concerned that other organic material in the water might interfere with the disinfection efficacy of the chlorine concentrations in our treatments, we tested the residual chlorine concentrations using the digital colorimeter method described in the CDC protocol for measuring residual free chlorine.
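Because the treatments are prepared daily by diluting NaOCl stock into 100 mL of sterile water, the required spike volume follows a simple C1·V1 = C2·V2 calculation. The sketch below is illustrative only: the stock concentration shown is a hypothetical value, not one reported in this study.

```python
def naocl_spike_volume_ml(stock_ppm, target_ppm, final_volume_ml):
    """Volume of NaOCl stock (mL) to dilute into `final_volume_ml` of
    sterile water so that free chlorine reaches `target_ppm`,
    assuming concentration scales linearly with dilution (C1*V1 = C2*V2)."""
    return target_ppm * final_volume_ml / stock_ppm

# Hypothetical 1,000 ppm free-chlorine stock; 100 mL batches as in the protocol
for target_ppm in (0.5, 4.0):
    vol = naocl_spike_volume_ml(1000, target_ppm, 100)
    print(f"{target_ppm} ppm -> add {vol:.3f} mL of stock")
```

In practice, free chlorine in hypochlorite stocks decays over time, which is why each daily preparation is verified colorimetrically rather than trusted to this idealized linear calculation.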
We confirmed that the residual chlorine concentration was stable throughout our experiment.
Sample processing
We used a 16S rRNA amplicon sequencing approach to characterize the bacterial communities found in each sourdough starter, also known as food microbiomes. We first extracted bacterial DNA from 1 g of dough, which we diluted in 9 mL of sterile peptone physiological solution (0.1% peptone, 0.85% NaCl). We then extracted microbial DNA from the diluted starter following the procedure described in the MoBio PowerFood DNA Extraction kit (MoBio, Carlsbad, CA). Finally, we amplified the V4 region of the 16S rRNA gene using the Golay-barcoded primers 515F and 806R. Following gel purification, libraries were pooled at equimolar ratios and sequenced on the MiSeq paired-end Illumina platform adapted for 250 bp paired-end reads (Wright Labs, Huntingdon, PA) according to the Earth Microbiome Project's protocol. All unprocessed sequence reads are available at the Sequence Read Archive of the National Center for Biotechnology Information (NCBI accession number: PRJNA784321).
Processing of 16S rRNA amplicon sequence data
We characterized the microbiome of each sourdough starter sample by identifying and tabulating the number of different sequence variants, also known as amplicon sequence variants (ASVs). Sequence variants can then be assigned to a taxonomic rank, usually at the genus level, providing additional information about the biology of each microbiome community. More specifically, we processed the 16S rRNA reads using the DADA2 pipeline version 1.20 ( https://github.com/benjjneb/dada2 ) with standard parameters unless otherwise specified, implemented in R version 4.1.1 ( http://www.r-project.org ) (see for full details).
In total, we obtained 1,160,000 pairs of forward and reverse reads (excluding eight samples that failed to sequence) with an average read length of 250 base pairs, totaling ~583 Mbases, with an average sequencing depth of 41,642.9 paired reads per sample. Each sequence read was then quality-checked, trimmed (forward reads at 240 bp and reverse reads at 225 bp), assessed for chimeric contaminants, and de-noised for possible sequencing errors. Following quality filtering, we conserved 980,261 (84.1% of the initial) paired-end reads. Taxonomy was assigned using both the DADA2 native taxa identifier function and IDTAXA, available via the DECIPHER Bioconductor package (DOI: 10.18129/B9.bioc.DECIPHER), trained on the SILVA ribosomal RNA gene database version 138.1 as well as the RDP trainset 18. A complete list of all ASVs and their abundance in each sample can be found in , and a complete taxonomic assignment can be found in . Finally, we built a maximum likelihood phylogenetic tree based on a multiple alignment of all the ASVs using the phangorn package version 2.1.3; the latter was used to estimate the total phylogenetic, or evolutionary, distance present in each sample.
Microbial community analysis
Microbiome diversity was analyzed using phyloseq version 1.30.0 (available at https://joey711.github.io/phyloseq/ ) implemented in R and visualized in ggplot2. A mapping file linking sample names and the different treatments is provided in . To estimate diversity indices, we rarefied all samples to the lowest sampling depth and estimated the total number of ASVs; Chao1, the predicted number of ASVs in the whole sample; and diversity as Simpson's Index, D. Whereas richness considers only the total number of ASVs, diversity also includes measures of evenness among the different ASVs present in a sample.
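The three quantities just described (observed richness, Chao1, and Simpson's index) can be illustrated with a minimal pure-Python sketch. The actual analysis used phyloseq in R; this is only a didactic re-implementation, using the bias-corrected Chao1 estimator and the 1 − Σp² form of Simpson's index.

```python
def diversity_indices(counts):
    """Richness, Chao1, and Simpson's index from one sample's ASV counts."""
    counts = [c for c in counts if c > 0]        # keep observed ASVs only
    richness = len(counts)                       # observed number of ASVs
    f1 = sum(1 for c in counts if c == 1)        # singletons
    f2 = sum(1 for c in counts if c == 2)        # doubletons
    # bias-corrected Chao1: predicted total number of ASVs in the sample
    chao1 = richness + f1 * (f1 - 1) / (2 * (f2 + 1))
    total = sum(counts)
    # Simpson's index as 1 - sum(p_i^2): the probability that two reads
    # drawn at random belong to different ASVs (higher = more even)
    simpson = 1 - sum((c / total) ** 2 for c in counts)
    return richness, chao1, simpson

# e.g., a community dominated by two ASVs, with three rarer ones
print(diversity_indices([10, 10, 1, 1, 2]))
```

Because Chao1 extrapolates from singletons and doubletons, it is sensitive to rarefaction depth, which is one reason all samples are rarefied to the lowest sampling depth before the indices are compared.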
We tested whether richness and diversity indices differed among treatments using linear modeling, comparing the different statistical models with Akaike's Information Criterion (AIC) as implemented in R's stats package. To test whether there were statistical differences in population structure between treatments (e.g., control microbiomes vs. microbiomes exposed to chlorinated water, as well as microbiomes exposed to air vs. microbiomes not exposed to air), we performed Principal Coordinate Analyses (PCoA) on phylogenetic distances calculated as weighted UniFrac distance scores. We used a permutational multivariate analysis of variance (PERMANOVA), implemented via the adonis function of vegan version 2.5.6, to test for significance. The latter is a non-parametric method that estimates F-values from distance matrices among groups and relies on permutations to determine the statistical significance of observed differences among group means. Finally, we confirmed that each test respected the homogeneity-of-variances assumption using the betadisper method of the vegan package.
Quantifying the presence of an antibiotic resistance and spoilage bacteria marker
Finally, we investigated whether chlorinated water increased the relative abundance of intI1, a gene encoding the class 1 integron integrase. The latter is almost entirely associated with spoilage or potentially harmful bacteria and facilitates the spread of antibiotic-resistance genes among bacteria. As described elsewhere, we quantified 16S rRNA and intI1 gene copy numbers from triplicate reactions using the Bio-Rad CFX96 Real-Time PCR Detection System (Bio-Rad Laboratories, Hercules, CA, USA). We included internal standard curves with each qPCR run to estimate the copy numbers of both intI1 and 16S rRNA. To normalize our intI1 findings and allow comparisons between samples, we divided the intI1 copy number by the 16S rRNA copy number, providing a measure of intI1 relative abundance in each sample.
The 16S rRNA copy number was adjusted by dividing it by 4.2, the average number of 16S rRNA copies per bacterial cell. Finally, we used linear modeling to test for the effect of water chlorination on intI1 relative abundance and compared the different statistical models with Akaike's Information Criterion (AIC) as implemented in R's stats package. Although we used the square-root-transformed data for statistical analysis, we plotted the raw data.
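The normalization described above amounts to expressing intI1 copies per estimated bacterial cell; a minimal sketch (the function and variable names are ours, not from the study):

```python
def intI1_relative_abundance(intI1_copies, rrna_copies, rrna_per_cell=4.2):
    """intI1 copies per estimated bacterial cell.

    The 16S rRNA copy number is divided by the average number of 16S
    copies per cell (4.2) to approximate the number of bacteria in the
    sample; intI1 counts are then normalized to that cell estimate."""
    estimated_cells = rrna_copies / rrna_per_cell
    return intI1_copies / estimated_cells

# e.g., 1.0e3 intI1 copies against 4.2e6 16S copies (~1e6 estimated cells)
print(intI1_relative_abundance(1.0e3, 4.2e6))
```

Dividing by an average per-cell 16S copy number is a common but approximate correction, since copy number varies widely among bacterial taxa.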
Sourdough starters are composed of two ingredients, flour and water, mixed together and regularly replenished to favor microbial growth. To control for possible variation in flour composition, we used a single bag of organic, stone-ground whole wheat flour (King Arthur Flour, Norwich, VT) for the entire experimental period of sourdough fermentation. According to the manufacturer’s website, the flour is made from dark northern hard red wheat, a varietal of the common wheat ( Triticum aestivum ) that contains a higher protein content (~13.8%). We established the sourdough starters by combining 10 g of flour with 10 mL of control or treated water (see below) in sterilized polypropylene Nalgene bottles. Next, we mixed manually with an ethanol-sterilized glass rod, resulting in a dough yield (DY) of 200, or pastelike ‘consistency’ . We then fed the sourdough starter every 24 hours (±2 hours), commonly called “backslopping,” by discarding 50% of the initial DY, and replenishing with a fresh mixture of flour and water to achive the initial DI. We ensured the complete homogenization of each sourdough starter by pouring the dough into a sterile bag and homogenizing it using the BagMixer 5000 (Interscience, Saint Nom la Brétèche, France) using default parameters for 60 seconds. We then remove 50% of the starter by weight and replace it with a fresh mixture of flour and water, as described above. Next, the dough was scraped down and remixed on the same settings. The freshly fed starter paste was then squeezed into a new sterilized polypropylene bottle. The bottles were placed in a dark cupboard for 24 hours at ambient room temperature maintained at 22–25°C. We repeated the feeding procedure six times for a total fermentation time of 7 days. To identify the optimal growing conditions for investigating the possible effects of water chlorination, we established two independent trials. 
First, we chose to limit the exposure of the sourdough starters to bacteria in the air by having an “air-tight” container or screwing on the lid tightly. The only times the container lids were removed were to refresh the starter. Second, we conducted a second set of experiments with identical conditions, but this time allowing exposure to air located in a working kitchen. For each experiment, we established three replicate starters for each control and chlorine treatment for a total of 18 experimental populations. All measurements and mixing were done under sterile, aseptic conditions throughout the study.
To test for the possible effects of water chlorination on sourdough starters, we established and maintained sourdough starters with three water chlorination treatments resulting in three concentrations of free chlorine in the water: 0 ppm (or control); 0.5 ppm (0.5 mg/L), and 4.0 ppm (4.0 mg/L). The treatments were chosen to reflect the minimum and maximum residual amount of chlorine in finished drinking water in the United States of America (CDC, 2020). We prepared chlorinated waters daily before sourdough feeding by diluting sodium hypochlorite, or NaOCl, into 100 mL of sterilized water. We tested for the water chlorination level in each water preparation using the N, N-dimethylacetamide method as implemented using the LaMotte Chlorine test kit (LaMotte, Baltimore, USA). Briefly, 5 mL of chlorinated water was mixed with N, N diethyl-p-phenylenediamine and was compared with the provided color chart, indicating the available chlorine concentration of the solution. Because we were also concerned with the presence of other organic material in the water interfering with the disinfection efficacy of chlorine concentrations in our treatments, we tested the residual chlorine concentrations using the digital colorimeters method as described in the CDC protocol for measuring residual free chlorine . We confirmed that residual chlorine concentration was stable throughout our experiment.
We used a 16S rRNA amplicon sequencing approach to characterize the bacterial communities, or food microbiomes, found in each sourdough starter. We first diluted 1 g of dough in 9 mL of sterile peptone physiological solution (0.1% peptone, 0.85% NaCl), then extracted microbial DNA from the diluted starter using the procedure described in the MoBio PowerFood DNA Extraction kit (MoBio, Carlsbad, CA). Finally, we amplified the V4 region of the 16S rRNA gene using the Golay-barcoded primers 515F and 806R . Following gel purification, libraries were pooled at equimolar ratios and sequenced on the MiSeq paired-end Illumina platform adapted for 250 bp paired-end reads (Wright Labs, Huntingdon, PA), according to the Earth Microbiome Project’s protocol . All unprocessed sequence reads are available at the Sequence Read Archive of the National Center for Biotechnology Information (NCBI accession number: PRJNA784321 ).
16S rRNA amplicon sequence data

We characterized the microbiome of each sourdough starter sample by identifying and tabulating the number of different sequence variants, known as amplicon sequence variants (ASVs). Sequence variants can then be assigned to a taxonomic rank, usually at the genus level, providing additional information about the biology of each microbiome community. More specifically, we processed the 16S rRNA reads using the DADA2 pipeline version 1.20 (available at https://github.com/benjjneb/dada2 ) with standard parameters unless otherwise specified, implemented in R version 4.1.1 ( http://www.r-project.org ) (see for full details). In total, we obtained 1,160,000 pairs of forward and reverse reads (excluding eight samples that failed to sequence), with an average read length of 250 base pairs, totaling ~583 Mb, and an average sequencing depth of 41,642.9 paired reads per sample. Each sequence read was then quality-checked, trimmed (forward reads at 240 bp and reverse reads at 225 bp), assessed for chimeric contaminants, and de-noised for possible sequencing error. Following quality filtering, we retained 980,261 (84.1% of the initial) paired-end reads. Taxonomy was assigned using both the DADA2 native taxa identifier function and IDTAXA, available via the DECIPHER Bioconductor package (DOI: 10.18129/B9.bioc.DECIPHER ), trained on the SILVA ribosomal RNA gene database version 138.1 as well as the RDP trainset 18 . A complete list of all ASVs and their abundance in each sample can be found in , and a complete taxonomic assignment can be found in . Finally, we built a maximum likelihood phylogenetic tree based on a multiple alignment of all ASVs using the phangorn package version 2.1.3 ; the latter was used to estimate the total phylogenetic, or evolutionary, distance present in each sample.
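The read bookkeeping in this paragraph reduces to simple arithmetic. A minimal sketch, using only the totals quoted in the text (the small discrepancy with the quoted 84.1% retention presumably reflects rounding of the reported read total):

```python
# Read-retention bookkeeping for the quality-filtering step.
# All totals are taken from the text; the quoted raw total (1,160,000)
# is itself rounded, which is why the retention computed here (~84.5%)
# differs slightly from the 84.1% reported in the paper.

raw_pairs = 1_160_000   # paired-end reads before filtering (rounded)
kept_pairs = 980_261    # paired-end reads surviving DADA2 quality filtering

retention = kept_pairs / raw_pairs
print(f"retained {retention:.1%} of read pairs")

mean_depth = 41_642.9   # average paired reads per sample, from the text
n_samples = raw_pairs / mean_depth
print(f"implies ~{n_samples:.0f} sequenced samples")
```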
Microbiome diversity was analyzed using phyloseq version 1.30.0 (available at https://joey711.github.io/phyloseq/ ) implemented in R and visualized in ggplot2 . A mapping file linking sample names and the different treatments is provided in . To estimate diversity indices, we rarefied all samples to the lowest sampling depth and estimated the total number of observed ASVs, Chao1 (the predicted number of ASVs in the whole sample), and diversity as Simpson’s Index ( D ). Whereas richness considers only the total number of ASVs, diversity also incorporates evenness among the different ASVs present in a sample. We tested whether richness and diversity indices differed among treatments using linear modeling, comparing the candidate statistical models with Akaike’s Information Criterion (AIC) as implemented in R’s stats package. To test whether there were statistical differences in population structure between treatments (e.g., control microbiomes vs. microbiomes exposed to chlorinated water, as well as microbiomes exposed to air vs. microbiomes not exposed to air), we performed Principal Coordinate Analyses (PCoA) on phylogenetic distances calculated as weighted UniFrac distance scores . We used a permutational multivariate analysis of variance (PERMANOVA), implemented via the adonis function of vegan version 2.5.6 , to test for significance. The latter is a non-parametric method that estimates F -values from distance matrices among groups and relies on permutations to determine the statistical significance of observed differences among group means. Finally, we confirmed that each test respected the homogeneity of variances assumption using the betadisper method of the vegan package .
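The two per-sample diversity summaries used here are easy to compute directly from a vector of ASV read counts. A minimal sketch with toy counts (not study data); the authors' actual computation was done in R via phyloseq:

```python
# Chao1 richness estimate and Simpson's diversity index for one sample,
# computed from a vector of ASV read counts (toy data, not study data).

def chao1(counts):
    """Chao1: observed richness plus a correction from singletons/doubletons."""
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)   # singleton ASVs
    f2 = sum(1 for c in counts if c == 2)   # doubleton ASVs
    if f2 == 0:                              # bias-corrected form avoids /0
        return s_obs + f1 * (f1 - 1) / 2
    return s_obs + f1 * f1 / (2 * f2)

def simpson(counts):
    """Simpson's index D = 1 - sum(p_i^2); higher means a more even community."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts if c > 0)

counts = [500, 300, 100, 50, 25, 1, 1, 2]   # 8 observed ASVs
print(chao1(counts), round(simpson(counts), 3))
```

Because Chao1 extrapolates from rare ASVs, it can exceed observed richness (here 10 predicted vs. 8 observed), which is why the text distinguishes observed from predicted ASV counts.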
Finally, we investigated whether chlorinated water increased the relative abundance of intI1 , a gene encoding integron class 1 integrase . The latter is almost entirely associated with spoilage or potentially harmful bacteria and facilitates the spread of antibiotic-resistance genes among bacteria . As described elsewhere , we quantified 16S rRNA and intI1 gene copy numbers from triplicate reactions using the Bio-Rad CFX96 Real-Time PCR Detection System (Bio-Rad Laboratories, Hercules, CA, USA). We included internal standard curves with each qPCR run to estimate the copy number of both intI1 and 16S rRNA . To normalize our intI1 findings and allow comparisons between samples, we divided the intI1 copy number by the 16S rRNA copy number, providing a measure of intI1 relative abundance in each sample. The 16S rRNA copy number was adjusted by dividing it by 4.2, the average number of 16S rRNA copies per bacterial cell . Finally, we used linear modeling to test for the effect of water chlorination on intI1 relative abundance, comparing the candidate statistical models with Akaike’s Information Criterion (AIC) as implemented in R’s stats package. Although we used square root-transformed data for the statistical analysis, we plotted the raw data.
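The normalization described above is a ratio of copy numbers: intI1 copies divided by an estimate of cell count derived from 16S copies. A minimal sketch with toy qPCR values (not study data):

```python
# Normalizing intI1 qPCR copy numbers by 16S rRNA copy numbers to obtain
# a per-cell relative abundance. The 4.2 divisor is the mean number of
# 16S rRNA copies per bacterial cell used in the study; the qPCR copy
# numbers below are toy values, not the study's measurements.

MEAN_16S_COPIES_PER_CELL = 4.2

def intI1_relative_abundance(intI1_copies: float, rrna_copies: float) -> float:
    """intI1 copies per estimated bacterial cell."""
    estimated_cells = rrna_copies / MEAN_16S_COPIES_PER_CELL
    return intI1_copies / estimated_cells

# Toy example: 1.2e4 intI1 copies against 2.1e7 16S rRNA copies
rel = intI1_relative_abundance(1.2e4, 2.1e7)
print(f"{rel:.2e} intI1 copies per cell")
```

Dividing by per-cell 16S copy number matters because a raw intI1/16S ratio would systematically understate how many cells carry the gene.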
Overall microbial population structure

Using 16S rRNA amplicon sequencing to characterize the sourdough starter microbiomes, we identified a total of 150 unique ASVs, with a median of nine ASVs found in each starter . The number of ASVs found in each starter varied greatly, from four ASVs in a starter fermented for 1 day to up to 65 ASVs in a starter fermented for 7 days. In total, ASVs were matched to 82 different bacterial genera , with Latilactobacillus being the most common among our samples, 49.4(39.5)%, followed by Pantoea sp., 27.2(39.0)%, Weissella sp., 9.1(23.2)%, and Pseudomonas sp., 8.7(26.7)%. As taxonomy changed slightly depending on the database and algorithm used , we hereafter present our results using the RDP database with the TaxaID taxonomy identifier; the latter yielded the smallest number of unidentified ASVs.

Identifying the source of microbial fermentation in new sourdough starters

We first wanted to identify the ideal conditions for establishing sourdough starters in our facilities. To do so, we established two sourdough starter trials: one under sterile laboratory conditions with filtered air, and a second in a kitchen environment with exposure to unfiltered air. When comparing the microbial communities found in the sourdoughs after 7 days of fermentation using the same feeding procedure, we found a significant divergence in population structure between the two groups, estimated from the weighted UniFrac distance matrix based on phylogenetic distance ( F (1,16) = 27.7; R 2 = 0.65; adj- P = 0.002; ). Similarly, we found a significant difference in population structure when comparing the overall dissimilarity in ASV composition between the two groups ( F (1,23) = 15.57; adj- P = 0.002; ). Although this comparison could be influenced by a certain degree of heterogeneity in variance between the two groups, we found similar results even when the number of samples was kept constant between the two groups by subsampling, suggesting that there is a real divergence in population structure. Although we did not find significant differences in the number of observed ASVs ( F (1,15) = 5.08; adj- P = 0.16; ) or in evenness measured as Simpson’s Index ( F (1,25) = 4.16; adj- P = 0.18; ), the main difference between the two trials was that the most common ASV found in the presence of unfiltered air was identified as Latilactobacillus sp. . In contrast, the most common ASV found under clean laboratory conditions was identified as Pantoea sp., a genus not commonly associated with sourdough starters. Although the two groups showed some level of heteroscedasticity ( P = 0.04), the significant difference in community structure between communities exposed to unfiltered air and those exposed to laboratory-filtered air held after subsampling the latter. Finally, we found that starters grown in laboratory conditions were significantly less likely to result in communities where lactic acid bacteria made up at least 80% of the total identified ASVs (Fisher’s exact test: P = 0.05). For the above reasons, we decided to investigate the possible effect of water chlorination only on sourdoughs exposed to air.

The effect of water chlorination on sourdough starters

To test for the possible effect of water chlorination on the microbial communities developing in sourdough starters, we exposed starters to three different chlorination treatments, including a control group not exposed to hypochlorite ion. We found no evidence that water chlorination affected the overall structure of microbial communities in sourdough starters. Overall, the same few ASVs dominated the populations by day 7 in all treatments . Using principal coordinate analysis to detect possible changes in community structure, measured as phylogenetic distance via UniFrac scores, we found that microbial populations changed significantly over time (ADONIS: F (1,15) = 8.01; R 2 = 0.35; adj- P = 0.004) and did so similarly in all water chlorination treatments (ADONIS: F (2,14) = 0.12; R 2 = 0.02; adj- P > 0.99; ). In other words, even a chlorine concentration at a level observed in some of the most chlorinated public water systems, that is, 4 ppm, did not modify the relative abundance of most ASVs and taxa observed in the different samples. Similarly, although the number of predicted ASVs observed in the starters decreased over time, the number of ASVs did not differ between the water chlorination treatments ( F (2,5) = 0.10; P = 0.90; ). We also found that chlorine levels did not affect diversity as measured by Simpson’s index ( D ), a measure that is sensitive to how evenly the ASVs are distributed in the samples ( F (2,5) = 0.02; P = 0.98; ). As with observed ASVs, however, diversity decreased over time ( F (1,5) = 11.92; P = 0.02; ). Interestingly, when we look at the dispersion in the number of ASVs around the mean for each treatment, as measured by variance, variance seems to increase in the presence of chlorine. Unfortunately, we do not have enough data points to test for this pattern. Still, this result suggests that water chlorination could produce finer changes while not altering the core microbial communities in the starters.

Investigating the presence of integron 1 during fermentation

Although we did not observe major changes in community structure or diversity, our previous results suggest that changes in community dynamics could have happened at a finer scale . For example, chlorine could exert selective pressure on a resistant strain within a genus, or even at the gene level via horizontal gene transfer. For this reason, we investigated the relative abundance of intI1 , the gene encoding integron class 1 integrase. The latter is a genetic mechanism enabling the quick transfer of genes; it is almost always associated with bacteria with spoilage or pathogenic potential and is usually associated with antibiotic resistance . Using quantitative PCR, we found that chlorinated water affected the relative abundance of intI1 in sourdoughs over time (treatment:time: F (2,18) = 4.17; P = 0.03; ). More specifically, we found that the highest chlorine concentration, that is, 4 ppm, significantly increased the relative abundance of intI1 by day 7 ( t = 2.59; P = 0.02; ). Interestingly, we found no difference in intI1 relative abundance among the different chlorine concentrations at day 4, suggesting that selective pressure for intI1 could not yet be detected at that point. In other words, only at the highest chlorine concentration did a larger proportion of the bacteria detected via qPCR harbor the intI1 gene by the end of the experiment. In contrast, the relative abundance of the gene stayed more or less constant across all other treatments over time.
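The comparison underlying this result, day-7 versus day-4 relative abundance within each treatment, can be sketched as a simple fold-change calculation. The numbers below are toy values chosen only to mirror the qualitative pattern reported, not measurements from the study:

```python
# Day-4 vs day-7 change in intI1 relative abundance per chlorine treatment.
# Toy values (not study data) chosen to illustrate the reported pattern:
# a marked increase only at the highest chlorine concentration.

rel_abundance = {            # treatment: (day 4, day 7) intI1 copies per cell
    "0 ppm":   (2.0e-4, 2.1e-4),
    "0.5 ppm": (2.1e-4, 2.2e-4),
    "4 ppm":   (2.0e-4, 6.3e-4),
}

fold_change = {t: d7 / d4 for t, (d4, d7) in rel_abundance.items()}
for treatment, fc in fold_change.items():
    print(f"{treatment}: day7/day4 fold change = {fc:.2f}")
```

In this toy example only the 4 ppm treatment departs from a fold change near 1, which is the shape of the interaction the treatment:time term in the linear model is testing for.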
Multiple factors, such as flour quality or fermentation time, influence the proper development of sourdough starters . Understanding the factors that shape the development of sourdough starters is not only important for the reliable production of quality sourdough but can also shed new light on the cultural importance of bread making. Here, we show how air quality and water chlorination can influence the development of the microbial communities found in sourdough starters. We also found that sourdough starters affected by a chemical stressor, such as free chlorine in water, can be less resilient to possible food-spoiling or pathogenic bacteria. More specifically, we found that sourdough starters exposed to unfiltered air developed healthy microbial communities dominated by bacterial taxa most often associated with traditional sourdough fermentation. On the other hand, when sourdough starters were exposed to filtered air in a controlled laboratory setting, the starters were dominated by Pantoea sp., an Enterobacteriaceae initially isolated as a plant pathogen that can also be isolated from the human and animal gut as well as from soil and water . Some species of Pantoea are known contaminants of sourdough and can negatively affect fermentation . The fact that Pantoea dominated all of our starter replicates grown independently suggests that the contaminant was likely present in the flour we used to establish the starters and that exposure to unfiltered, well-oxygenated air is crucial for the proper development of healthy sourdough starters. Interestingly, the amount of chlorine present did not affect the overall microbial community structure in healthy sourdough starters. Regardless of the chlorine concentration used, the same dominant bacterial taxon, Latilactobacillus sp., was detected in all starters by the end of the experiment.
The latter is a lactic acid bacterium commonly identified in sourdough starters and other fermented products . In fact, Latilactobacillus sp. accounted for more than 80% of the total read count in all our sourdough starters, confirming that chlorine did not affect our ability to produce healthy sourdough starters, as previously predicted. Our results, however, show that chlorine could affect microbial communities at the gene level. We found that the relative abundance of the gene intI1 increased significantly with the chlorine concentration in water. More specifically, the relative abundance of intI1 was significantly higher in starters exposed to the highest free chlorine level by the end of the feeding period. Interestingly, we did not observe this difference after 4 days of feeding the sourdough or between the other concentrations of free chlorine. This result suggests that the effect of free chlorine accumulates over time and is only detectable at the higher concentrations found in public water systems. Although the copy number of intI1 was relatively low compared with the total number of 16S rRNA gene copies identified in our study, the presence of a marker associated with antibiotic-resistance genes and spoilage bacteria should be taken seriously. Whether our observation of this gene marker translates into the actual presence of antibiotic-resistant bacteria in sourdough remains to be tested. However, it is known that some bacterial strains associated with sourdough fermentation show intrinsic resistance to antibiotics , and that the farming systems in which raw ingredients were grown can also contribute to the antimicrobial resistance gene composition . Finally, even if the genomic content of starter cultures does contain a high abundance of antibiotic-resistance genes, how this affects the subsequent functionality of fermented food has yet to be determined .
In conclusion, our study provides an important proof of principle of the possible effects of water chlorination on sourdough starters and contributes to the growing body of literature investigating how environmental variables shape fermented foods. Our findings also suggest that whole-genome sequencing conducted at the population level, often referred to as metagenomics, may be required to fully understand the finer changes in microbial communities affected by water chlorination and other environmental factors, which could ultimately impact the desired gastronomic properties of sourdough bread.